Compare commits

...

73 Commits

Author SHA1 Message Date
trivernis db01332c15
Fix build errors and update to latest tauri version 4 months ago
trivernis 7047dc7903
Update packages 4 months ago
Julius Riegel 3a25a8b812
Merge pull request #25 from Trivernis/develop
Develop
2 years ago
trivernis d83f211ceb
Increment version
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 5dd8eefdcc
Add job status to maintenance menu
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 6dfefe01c2
Add maintenance menu to repository overview
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 99c224586a
Fix more clippy warnings
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 8256751a6f
Remove vacuum from automatically run commands
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 580c27bbd1
Fix cargo clippy warnings
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 37f7a2c82f
Add linting to check workflow
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 89e9e182dd
Improve ui startup performance
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
Julius Riegel 75bc9821b1
Merge pull request #24 from Trivernis/develop
Develop
2 years ago
trivernis e7f09dd2b5
Update container builds
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 49c41ce127
Fix missing plus and edit icon
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
Julius Riegel af28527f36
Merge pull request #23 from Trivernis/develop
CSP Hotfix
2 years ago
trivernis b4e52b0db0
Increment version
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 146088e747
Fix csp errors
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
Julius Riegel 16cb977e45
Merge pull request #22 from Trivernis/develop
Version 1.0.1
2 years ago
trivernis f554690b88
Change base image of debug containerfile to alpine
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 699f6b67bc
Fix build problems with latest tauri build
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 97846c86fc
Increase retry limit for loading images and thumbnails
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 3c8afd4946
Update UI dependencies
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 78e1f26a8b
Update dependencies
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
Julius Riegel 1047320c7c
Merge pull request #21 from Trivernis/feature/jobs
Feature/jobs
2 years ago
trivernis 0eda0d2c22
Improve job state loading and storing
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis fb869dabb1
Add job for checking the integrity
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis e9dbbd1bd5
Add job handle to jobs to wait for results
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis a87c341867
Add job implementation to generate missing thumbnails
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 22f28a79d1
Update to latest bromine
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 2f11c87395
Add calculate_sizes implementation as dispatchable job
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis a57a6f32c4
Implement periodic job dispatching
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 056166ee60
Integrate worker into main application
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis a145d604d9
Simplify job implementation
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 1735d08c1d
Update api dependencies
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 2b76e35150
Merge branch 'develop' into feature/jobs
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 479aee1551
Update tauri
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
Julius Riegel 4c170ca333
Merge pull request #20 from Trivernis/feature/telemetry
Feature/telemetry
2 years ago
trivernis a11a2f3dc5
Add opt-in performance tracing telemetry
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis aa772ea173
Add tracing layer list and refactor logging implementation
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis a2aef104ee
Move whole main function into an async context
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis be3e9bbce3
Fix migration sql
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 0cb37e1268
Add vacuum job
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 496d720219
Add job scheduler implementation with progress report
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 530fbd7606
Add job trait and state data objects
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 8b342f6aac
Add tables to store job information
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis e5cabd4e9b
Update daemon dependencies
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
Julius Riegel 035b54a502
Create CODE_OF_CONDUCT.md 2 years ago
Julius Riegel 7981303d68
Update issue templates 2 years ago
Julius Riegel 351f74bf14
Merge pull request #18 from Trivernis/develop
Develop
2 years ago
trivernis 18546eaabb
Increment version
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 179dcf0d4e
Delete orphaned tags
TG-109 #ready-for-test

Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis a94cc48a5c
Fix file selection dialog extensions only filtering lowercase
TG-108 #closed

Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 71d4287246
Fix file status not updating visually
TG-107 #closed

Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis f033589fcb
Merge branch 'develop' of github.com:Trivernis/mediarepo into develop 2 years ago
trivernis 79dad3af0b
Fix filter for date values
TG-110 #ready-for-test

Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis c3b8304290
Fix delete sweep deleting status filters
TG-111 #closed

Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 476187a8ad
Fix delete sweep deleting status filters
TG-111 #closed

Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
Julius Riegel 84f36a6875
Merge pull request #17 from Trivernis/develop
(Last?) Release Candidate before release
2 years ago
trivernis 3499ee9392
Increment version
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis ded9eafd36
Fix grid entry size on last row
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis fcbfa25435
Fix color of navbar
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 39978ff34c
Add chart to repository details
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis bf8e70f846
Fix status background on wider grid entries
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 9f37173edc
Update dependencies of daemon
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
Julius Riegel eb2062e410
Merge pull request #16 from Trivernis/develop
Develop
2 years ago
trivernis 093396c16f
Increment version
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 68ef43be12
Code cleanup
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 9c9c861a08
Change repository overview to center align entries
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 77d288d905
Add about dialog
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis bb2a63c703
Move repository overview to separate component
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis ed8ddf4d9f
Update workflows to use new buildscript location
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
trivernis 76a0b13e5b
Fix buildscript
Signed-off-by: trivernis <trivernis@protonmail.com>
2 years ago
Trivernis 6f95538834
Split the build script tasks into separate files
Build scripts are now located in the scripts folder.

Signed-off-by: Trivernis <trivernis@protonmail.com>
2 years ago

@@ -1,5 +1,8 @@
# compiled output
out
__pycache__
target
dist
# IDEs and editors
mediarepo-api/.idea
@@ -36,4 +39,4 @@ mediarepo-daemon/*.folded
# api
mediarepo-api/.idea
mediarepo-api/target
mediarepo-api/target

@@ -0,0 +1,32 @@
---
name: Bug report
about: Create a report to help us improve
title: "[bug] "
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**System (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.

@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: "[feature]"
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

@@ -91,7 +91,7 @@ jobs:
python-version: '^3.7'
- name: Build
run: python build.py build --daemon --verbose
run: python scripts/build.py daemon --verbose --install-tooling
- name: Upload artifacts
if: ${{ !env.ACT }}
@@ -150,7 +150,7 @@ jobs:
DEBIAN_FRONTEND=noninteractive sudo apt-get install libwebkit2gtk-4.0-dev libgtk-3-dev libappindicator3-dev -y
- name: Build project
run: python build.py build --ui --verbose
run: python scripts/build.py ui --verbose --install-tooling
- name: Upload artifacts
if: ${{ !env.ACT }}

@@ -36,7 +36,15 @@ jobs:
- name: Check daemon
working-directory: mediarepo-daemon
run: cargo check --no-default-features
run: cargo check
- name: Lint api
working-directory: mediarepo-api
run: cargo clippy -- -D warnings
- name: Lint daemon
working-directory: mediarepo-daemon
run: cargo clippy -- -D warnings
- name: Install UI dependencies
working-directory: mediarepo-ui
@@ -47,4 +55,4 @@ jobs:
- name: Lint ui frontend
working-directory: mediarepo-ui
run: yarn lint
run: yarn lint

@@ -51,7 +51,7 @@ jobs:
python-version: '^3.7'
- name: Build Daemon
run: python build.py build --daemon --verbose
run: python scripts/build.py daemon --verbose --install-tooling
- uses: vimtor/action-zip@v1
with:
@@ -113,7 +113,7 @@ jobs:
DEBIAN_FRONTEND=noninteractive sudo apt-get install libwebkit2gtk-4.0-dev libgtk-3-dev libappindicator3-dev -y
- name: Build project
run: python build.py build --ui --verbose
run: python scripts/build.py ui --verbose --install-tooling
- uses: vimtor/action-zip@v1
with:

.gitignore

@@ -4,6 +4,7 @@
/out-tsc
/target
/out
__pycache__
# IDEs and editors
mediarepo-api/.idea

@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
me@trivernis.net.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

@@ -0,0 +1,45 @@
ARG BASE_IMAGE=docker.io/alpine:latest
FROM ${BASE_IMAGE} AS base
RUN apk update
RUN apk add --no-cache \
build-base \
openssl3-dev \
gtk+3.0-dev \
libappindicator-dev \
patchelf \
librsvg-dev \
curl \
wget \
clang \
nodejs \
npm \
libsoup-dev \
webkit2gtk-dev \
file \
python3 \
bash \
protoc
RUN curl https://sh.rustup.rs -sSf | bash -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
RUN rm -rf /var/lib/{cache,log}/ /var/cache
FROM base AS sources
WORKDIR /usr/src
COPY mediarepo-api ./mediarepo-api
COPY mediarepo-daemon ./mediarepo-daemon
COPY mediarepo-ui ./mediarepo-ui
COPY scripts ./scripts
RUN python3 scripts/clean.py
RUN python3 scripts/check.py --install
FROM sources AS build_daemon
WORKDIR /usr/src
RUN python3 scripts/build.py daemon --verbose
RUN mkdir ./test-repo
RUN ./out/mediarepo-daemon --repo ./test-repo init
FROM sources AS build_ui
WORKDIR /usr/src
RUN python3 scripts/build.py ui --verbose --bundles deb

@@ -1,40 +0,0 @@
ARG DEBIAN_RELEASE=bullseye
FROM bitnami/minideb:${DEBIAN_RELEASE} AS builder
WORKDIR /usr/src
COPY mediarepo-api ./mediarepo-api
COPY mediarepo-daemon ./mediarepo-daemon
COPY mediarepo-ui ./mediarepo-ui
COPY build.py .
RUN apt-get update
RUN apt-get install -y \
build-essential \
libssl-dev \
libgtk-3-dev \
libappindicator3-0.1-cil-dev \
patchelf \
librsvg2-dev \
curl \
wget \
pkg-config \
libavutil-dev \
libavformat-dev \
libavcodec-dev \
libavfilter-dev \
libavdevice-dev \
clang \
nodejs \
npm \
libsoup2.4-dev \
libwebkit2gtk-4.0-dev \
file \
python
RUN apt remove cmdtest -y
RUN curl https://sh.rustup.rs -sSf | bash -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
RUN python3 build.py build

@@ -69,30 +69,37 @@ You also need to have a working [python](https://www.python.org/) installation o
After all required dependencies are installed and tools are accessible in the `PATH`, you can build the project like follows:
> Note: You might need to make the `build.py` file executable with `chmod +x build.py`.
Check (and install) required tooling:
```sh
$ ./scripts/check.py --install
```
> Note: this only installs tools that are installable via cargo or npm
All Components:
```sh
$ ./build.py build --ffmpeg
$ ./scripts/build.py all
```
Daemon only:
```sh
$ ./build.py build --daemon --ffmpeg
$ ./scripts/build.py daemon
```
If you don't want to build with ffmpeg support, omit the `--ffmpeg` flag.
UI only:
```sh
$ ./build.py build --ui
$ ./scripts/build.py ui
```
Clean the output directory:
```sh
$ ./scripts/clean.py
```
After building the `out` directory contains all the built binaries and bundles.
### Test Builds
For test builds the `Dockerfile` in this repository can be used. This way no build dependencies need to be installed on the system. The dockerfile doesn't provide any artifacts and can only be used for validation.
For test builds the `Containerfile` in this repository can be used. This way no build dependencies need to be installed on the system. The Containerfile doesn't provide any artifacts and can only be used for validation.
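With a container engine this typically amounts to building the stage you want to validate, e.g. `docker build -f Containerfile --target build_daemon .` (illustrative command; `podman build` picks up a `Containerfile` by default).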
## Usage and Further Information

@@ -1,210 +0,0 @@
#!/bin/env python3
import shutil as shut
import os
import subprocess
tauri_cli_version = '1.0.0-rc.5'
build_output = 'out'
verbose = False
ffmpeg = False
windows = os.name == 'nt'
def exec(cmd: str, dir: str = None) -> str:
print('Running: {}'.format(cmd))
child = subprocess.run(cmd, shell=True, cwd=dir)
child.check_returncode()
def check_exec(name: str):
print('Checking {}...'.format(name))
if shut.which(name) is None:
raise Exception('{} not found'.format(name))
exec(name + ' --version')
def check_yarn():
print('Checking yarn...')
if shut.which('yarn') is None:
print('installing yarn...')
npm('install -g yarn')
check_exec('yarn')
exec('yarn --version')
def check_ng():
print('Checking ng...')
if shut.which('ng') is None:
print('installing ng...')
npm('install -g @angular/cli')
check_exec('ng')
exec('ng --version')
def store_artifact(path: str):
print('Storing {}'.format(path))
if os.path.isdir(path):
shut.copytree(path, os.path.join(
build_output, os.path.basename(path)), dirs_exist_ok=True)
else:
shut.copy(path, build_output)
def cargo(cmd: str, dir: str = None):
if verbose:
exec('cargo {} --verbose'.format(cmd), dir)
else:
exec('cargo {}'.format(cmd), dir)
def npm(cmd: str, dir: str = None):
exec('npm {}'.format(cmd), dir)
def yarn(cmd: str, dir: str = None):
exec('yarn {}'.format(cmd), dir)
def build_daemon():
'''Builds daemon'''
cargo('fetch', 'mediarepo-daemon')
if not ffmpeg:
cargo('build --release --frozen --no-default-features', 'mediarepo-daemon')
else:
cargo('build --release --frozen', 'mediarepo-daemon')
if windows:
store_artifact('mediarepo-daemon/target/release/mediarepo-daemon.exe')
else:
store_artifact('mediarepo-daemon/target/release/mediarepo-daemon')
def build_ui():
'''Builds UI'''
cargo('install tauri-cli --version ^{}'.format(tauri_cli_version))
yarn('install', 'mediarepo-ui')
cargo('tauri build', 'mediarepo-ui')
if windows:
store_artifact(
'mediarepo-ui/src-tauri/target/release/mediarepo-ui.exe')
else:
store_artifact('mediarepo-ui/src-tauri/target/release/mediarepo-ui')
store_artifact('mediarepo-ui/src-tauri/target/release/bundle/')
def check_daemon():
'''Checks dependencies for daemon'''
check_exec('clang')
check_exec('cargo')
def check_ui():
'''Checks dependencies for UI'''
if not windows:
check_exec('wget')
check_exec('curl')
check_exec('file')
check_exec('clang')
check_exec('cargo')
check_exec('node')
check_exec('npm')
check_yarn()
check_ng()
def check():
'''Checks dependencies'''
check_daemon()
check_ui()
print('All checks passed')
def create_output_dir():
'''Creates build output directory'''
if not os.path.exists(build_output):
os.mkdir(build_output)
def clean():
'''Removes build output'''
if os.path.exists(build_output):
shut.rmtree(build_output)
print('Cleaned')
def build(daemon=True, ui=True):
'''Builds both daemon and UI'''
clean()
create_output_dir()
if daemon:
check_daemon()
build_daemon()
if ui:
check_ui()
build_ui()
print('Build complete')
def parse_args():
import argparse
parser = argparse.ArgumentParser(description='Build mediarepo')
subparsers = parser.add_subparsers(dest='command')
subparsers.required = True
subparsers.add_parser('check')
build_parser = subparsers.add_parser('build')
build_parser.add_argument(
'--daemon', action='store_true', help='Build daemon')
build_parser.add_argument('--ui', action='store_true', help='Build UI')
build_parser.add_argument(
'--verbose', action='store_true', help='Verbose build')
build_parser.add_argument(
'--output', action='store', help='Build output directory')
build_parser.add_argument(
'--ffmpeg', action='store_true', help='Build with ffmpeg')
subparsers.add_parser('clean')
args = parser.parse_args()
return args
def main():
opts = parse_args()
if opts.command == 'build':
global build_output
build_output = opts.output if opts.output else build_output
global verbose
verbose = opts.verbose
global ffmpeg
ffmpeg = opts.ffmpeg
if opts.daemon:
build(True, False)
elif opts.ui:
build(False, True)
else:
build()
elif opts.command == 'check':
check()
elif opts.command == 'clean':
clean()
if __name__ == '__main__':
main()

@@ -0,0 +1,11 @@
[language-server.rust-analyzer]
command = "rust-analyzer"
[language-server.rust-analyzer.config]
inlayHints.bindingModeHints.enable = false
inlayHints.closingBraceHints.minLines = 10
inlayHints.closureReturnTypeHints.enable = "with_block"
inlayHints.discriminantHints.enable = "fieldless"
inlayHints.lifetimeElisionHints.enable = "skip_trivial"
inlayHints.typeHints.hideClosureInitialization = false
cargo.features = "all"

@@ -1,28 +1,28 @@
[package]
name = "mediarepo-api"
version = "0.31.0"
version = "0.33.0"
edition = "2018"
license = "gpl-3"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
tracing = "0.1.30"
tracing = "0.1.32"
thiserror = "1.0.30"
async-trait = { version = "0.1.52", optional = true }
async-trait = { version = "0.1.53", optional = true }
parking_lot = { version = "0.12.0", optional = true }
serde_json = { version = "1.0.78", optional = true }
serde_json = { version = "1.0.79", optional = true }
directories = { version = "4.0.1", optional = true }
mime_guess = { version = "2.0.3", optional = true }
mime_guess = { version = "2.0.4", optional = true }
serde_piecewise_default = "0.2.0"
futures = { version = "0.3.21", optional = true }
url = { version = "2.2.2", optional = true }
pathsearch = { version = "0.2.0", optional = true }
[dependencies.bromine]
version = "0.18.1"
version = "0.22.1"
optional = true
features = ["serialize_bincode"]
features = ["serialize_bincode", "encryption_layer"]
[dependencies.serde]
version = "1.0.136"
@@ -33,13 +33,13 @@ version = "0.4.19"
features = ["serde"]
[dependencies.tauri]
version = "1.0.0-beta.8"
optional=true
version = "1.5.4"
optional = true
default-features = false
features = []
[dependencies.tokio]
version = "1.16.1"
version = "1.17.0"
optional = true
features = ["sync", "fs", "net", "io-util", "io-std", "time", "rt", "process"]

@@ -34,4 +34,10 @@ impl JobApi {
Ok(())
}
/// Checks if a particular job is already running
#[tracing::instrument(level = "debug", skip(self))]
pub async fn is_job_running(&self, job_type: JobType) -> ApiResult<bool> {
self.emit_and_get("is_job_running", job_type, None).await
}
}
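As a usage sketch (not code from this change): a caller could consult the new method before dispatching work. `JobApi`, `JobType` and `ApiResult` are the crate's own types; the `run_job` counterpart is an assumption based on the surrounding API.

```rust
// Hypothetical caller sketch, assuming a run_job counterpart exists.
async fn vacuum_if_idle(job_api: &JobApi) -> ApiResult<()> {
    if !job_api.is_job_running(JobType::Vacuum).await? {
        // only dispatch when no vacuum run is already in flight
        job_api.run_job(JobType::Vacuum).await?;
    }
    Ok(())
}
```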

@@ -1,22 +1,22 @@
pub mod error;
pub mod file;
pub mod job;
pub mod preset;
pub mod protocol;
pub mod repo;
pub mod tag;
pub mod preset;
use crate::client_api::error::{ApiError, ApiResult};
use crate::client_api::file::FileApi;
use crate::client_api::job::JobApi;
use crate::client_api::preset::PresetApi;
use crate::client_api::repo::RepoApi;
use crate::client_api::tag::TagApi;
use crate::types::misc::{check_apis_compatible, get_api_version, InfoResponse};
use async_trait::async_trait;
use bromine::prelude::*;
use bromine::prelude::emit_metadata::EmitMetadata;
use bromine::prelude::*;
use tokio::time::Duration;
use crate::client_api::preset::PresetApi;
#[async_trait]
pub trait IPCApi {

@@ -1,6 +1,8 @@
use async_trait::async_trait;
use bromine::error::Result;
use bromine::prelude::encrypted::{EncryptedStream, EncryptionOptions};
use bromine::prelude::IPCResult;
use bromine::protocol::encrypted::EncryptedListener;
use bromine::protocol::*;
use std::io::Error;
use std::net::ToSocketAddrs;
@@ -9,12 +11,11 @@ use std::task::{Context, Poll};
use tokio::io::{AsyncRead, AsyncWrite, ReadBuf};
use tokio::net::{TcpListener, TcpStream};
#[derive(Debug)]
pub enum ApiProtocolListener {
#[cfg(unix)]
UnixSocket(tokio::net::UnixListener),
Tcp(TcpListener),
Tcp(EncryptedListener<TcpListener>),
}
unsafe impl Send for ApiProtocolListener {}
@@ -25,12 +26,14 @@ impl AsyncStreamProtocolListener for ApiProtocolListener {
type AddressType = String;
type RemoteAddressType = String;
type Stream = ApiProtocolStream;
type ListenerOptions = ();
#[tracing::instrument]
async fn protocol_bind(address: Self::AddressType) -> Result<Self> {
async fn protocol_bind(address: Self::AddressType, _: Self::ListenerOptions) -> Result<Self> {
if let Some(addr) = address.to_socket_addrs().ok().and_then(|mut a| a.next()) {
let listener = TcpListener::bind(addr).await?;
tracing::info!("Connecting via TCP");
let listener =
EncryptedListener::protocol_bind(addr, EncryptionOptions::default()).await?;
tracing::info!("Connecting via encrypted TCP");
Ok(Self::Tcp(listener))
} else {
@@ -67,19 +70,18 @@ impl AsyncStreamProtocolListener for ApiProtocolListener {
))
}
ApiProtocolListener::Tcp(listener) => {
let (stream, addr) = listener.accept().await?;
let (stream, addr) = listener.protocol_accept().await?;
Ok((ApiProtocolStream::Tcp(stream), addr.to_string()))
}
}
}
}
#[derive(Debug)]
pub enum ApiProtocolStream {
#[cfg(unix)]
UnixSocket(tokio::net::UnixStream),
Tcp(TcpStream),
Tcp(EncryptedStream<TcpStream>),
}
unsafe impl Send for ApiProtocolStream {}
@@ -97,7 +99,7 @@ impl AsyncProtocolStreamSplit for ApiProtocolStream {
(Box::new(read), Box::new(write))
}
ApiProtocolStream::Tcp(stream) => {
let (read, write) = stream.into_split();
let (read, write) = stream.protocol_into_split();
(Box::new(read), Box::new(write))
}
}
@@ -107,10 +109,15 @@ impl AsyncProtocolStreamSplit for ApiProtocolStream {
#[async_trait]
impl AsyncProtocolStream for ApiProtocolStream {
type AddressType = String;
type StreamOptions = ();
async fn protocol_connect(address: Self::AddressType) -> IPCResult<Self> {
async fn protocol_connect(
address: Self::AddressType,
_: Self::StreamOptions,
) -> IPCResult<Self> {
if let Some(addr) = address.to_socket_addrs().ok().and_then(|mut a| a.next()) {
let stream = TcpStream::connect(addr).await?;
let stream =
EncryptedStream::protocol_connect(addr, EncryptionOptions::default()).await?;
Ok(Self::Tcp(stream))
} else {
#[cfg(unix)]

@@ -2,6 +2,7 @@ use crate::daemon_management::find_daemon_executable;
use crate::tauri_plugin::commands::AppAccess;
use crate::tauri_plugin::error::PluginResult;
use crate::tauri_plugin::settings::save_settings;
use bromine::prelude::encrypted::EncryptedListener;
use bromine::prelude::{IPCError, IPCResult};
use bromine::IPCBuilder;
use std::io::ErrorKind;
@@ -53,7 +54,7 @@ pub async fn check_daemon_running(address: String) -> PluginResult<bool> {
async fn try_connect_daemon(address: String) -> IPCResult<()> {
let address = get_socket_address(address)?;
let ctx = IPCBuilder::<TcpListener>::new()
let ctx = IPCBuilder::<EncryptedListener<TcpListener>>::new()
.address(address)
.build_client()
.await?;

@@ -9,3 +9,11 @@ pub async fn run_job(api_state: ApiAccess<'_>, job_type: JobType, sync: bool) ->
Ok(())
}
#[tauri::command]
pub async fn is_job_running(api_state: ApiAccess<'_>, job_type: JobType) -> PluginResult<bool> {
let api = api_state.api().await?;
let running = api.job.is_job_running(job_type).await?;
Ok(running)
}
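From the UI this command becomes reachable through Tauri's `invoke` mechanism once it is registered with the plugin's command handler, which the `MediarepoPlugin` change below takes care of.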

@@ -57,7 +57,10 @@ fn once_scheme<R: Runtime>(app: &AppHandle<R>, request: &Request) -> Result<Resp
#[tracing::instrument(level = "debug", skip_all)]
async fn content_scheme<R: Runtime>(app: &AppHandle<R>, request: &Request) -> Result<Response> {
let buf_state = app.state::<BufferState>();
let hash = request.uri().trim_start_matches("content://");
let hash = request
.uri()
.trim_start_matches("content://")
.trim_end_matches("/");
if let Some(buffer) = buf_state.get_entry(hash) {
tracing::debug!("Fetching content from cache");

@@ -75,7 +75,8 @@ impl<R: Runtime> MediarepoPlugin<R> {
get_file_tag_map,
all_sorting_presets,
add_sorting_preset,
delete_sorting_preset
delete_sorting_preset,
is_job_running
]),
}
}

@@ -5,8 +5,9 @@ use std::time::{SystemTime, UNIX_EPOCH};
pub fn system_time_to_naive_date_time(system_time: SystemTime) -> NaiveDateTime {
let epoch_duration = system_time.duration_since(UNIX_EPOCH).unwrap();
NaiveDateTime::from_timestamp(
NaiveDateTime::from_timestamp_opt(
epoch_duration.as_secs() as i64,
epoch_duration.subsec_nanos(),
)
.unwrap()
}
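(Newer chrono releases deprecate `NaiveDateTime::from_timestamp` in favor of the fallible `from_timestamp_opt`; the trailing `unwrap` keeps the previous panicking behavior for out-of-range timestamps.)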

File diff suppressed because it is too large

@@ -1,10 +1,10 @@
[workspace]
members = ["mediarepo-core", "mediarepo-database", "mediarepo-logic", "mediarepo-socket", "."]
default-members = ["mediarepo-core", "mediarepo-database", "mediarepo-logic", "mediarepo-socket", "."]
members = ["mediarepo-core", "mediarepo-database", "mediarepo-logic", "mediarepo-socket", "mediarepo-worker", "."]
default-members = ["mediarepo-core", "mediarepo-database", "mediarepo-logic", "mediarepo-socket", "mediarepo-worker", "."]
[package]
name = "mediarepo-daemon"
version = "1.0.0-rc.2"
version = "1.0.5"
edition = "2018"
license = "gpl-3"
repository = "https://github.com/Trivernis/mediarepo-daemon"
@@ -16,17 +16,21 @@ name = "mediarepo-daemon"
path = "src/main.rs"
[dependencies]
tracing = "0.1.30"
tracing = "0.1.33"
toml = "0.5.8"
structopt = "0.3.26"
glob = "0.3.0"
tracing-flame = "0.2.0"
tracing-appender = "0.2.0"
tracing-appender = "0.2.2"
tracing-log = "0.1.2"
rolling-file = "0.1.0"
num-integer = "0.1.44"
console-subscriber = "0.1.1"
log = "0.4.14"
console-subscriber = "0.1.4"
log = "0.4.16"
opentelemetry = { version = "0.17.0", features = ["rt-tokio"] }
opentelemetry-jaeger = { version = "0.16.0", features = ["rt-tokio"] }
tracing-opentelemetry = "0.17.2"
human-panic = "1.0.3"
[dependencies.mediarepo-core]
path = "./mediarepo-core"
@@ -37,14 +41,13 @@ path = "mediarepo-logic"
[dependencies.mediarepo-socket]
path = "./mediarepo-socket"
[dependencies.mediarepo-worker]
path = "./mediarepo-worker"
[dependencies.tokio]
version = "1.16.1"
version = "1.21.2"
features = ["macros", "rt-multi-thread", "io-std", "io-util"]
[dependencies.tracing-subscriber]
version= "0.3.8"
version = "0.3.11"
features = ["env-filter", "ansi", "json"]
[features]
default = ["ffmpeg"]
ffmpeg = ["mediarepo-core/ffmpeg", "mediarepo-logic/ffmpeg"]

@@ -8,44 +8,39 @@ workspace = ".."
[dependencies]
thiserror = "1.0.30"
multihash = "0.16.1"
multihash = "0.16.2"
multibase = "0.9.1"
base64 = "0.13.0"
toml = "0.5.8"
serde = "1.0.136"
typemap_rev = "0.1.5"
futures = "0.3.21"
itertools = "0.10.3"
glob = "0.3.0"
tracing = "0.1.30"
tracing = "0.1.33"
data-encoding = "2.3.2"
tokio-graceful-shutdown = "0.4.3"
[dependencies.thumbnailer]
version = "0.3.0"
default-features = false
tokio-graceful-shutdown = "0.5.0"
thumbnailer = "0.4.0"
bincode = "1.3.3"
tracing-subscriber = "0.3.11"
trait-bound-typemap = "0.3.3"
[dependencies.sea-orm]
version = "0.6.0"
version = "0.7.1"
default-features = false
[dependencies.sqlx]
version = "0.5.10"
version = "0.5.11"
default-features = false
features = ["migrate"]
[dependencies.tokio]
version = "1.16.1"
version = "1.21.2"
features = ["fs", "io-util", "io-std"]
[dependencies.config]
version = "0.11.0"
version = "0.13.1"
features = ["toml"]
[dependencies.mediarepo-api]
path = "../../mediarepo-api"
features = ["bromine"]
[features]
default = []
ffmpeg = ["thumbnailer/ffmpeg"]

@@ -43,6 +43,9 @@ pub enum RepoError {
#[error("the database file is corrupted {0}")]
Corrupted(String),
#[error("bincode de-/serialization failed {0}")]
Bincode(#[from] bincode::Error),
}
#[derive(Error, Debug)]

@@ -71,7 +71,7 @@ impl ThumbnailStore {
let name = file_name.to_string_lossy();
let (height, width) = name
.split_once("-")
.split_once('-')
.and_then(|(height, width)| {
Some((height.parse::<u32>().ok()?, width.parse::<u32>().ok()?))
})

@@ -1,14 +1,17 @@
pub use bincode;
pub use futures;
pub use itertools;
pub use mediarepo_api;
pub use mediarepo_api::bromine;
pub use thumbnailer;
pub use tokio_graceful_shutdown;
pub use trait_bound_typemap;
pub mod content_descriptor;
pub mod context;
pub mod error;
pub mod fs;
pub mod settings;
pub mod tracing_layer_list;
pub mod type_keys;
pub mod utils;

@@ -1,11 +1,15 @@
use serde::{Deserialize, Serialize};
use tracing::Level;
const DEFAULT_TELEMETRY_ENDPOINT: &str = "telemetry.trivernis.net:6831";
#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct LoggingSettings {
pub level: LogLevel,
pub trace_sql: bool,
pub trace_api_calls: bool,
pub telemetry: bool,
pub telemetry_endpoint: String,
}
impl Default for LoggingSettings {
@@ -14,6 +18,8 @@ impl Default for LoggingSettings {
level: LogLevel::Info,
trace_sql: false,
trace_api_calls: false,
telemetry: false,
telemetry_endpoint: String::from(DEFAULT_TELEMETRY_ENDPOINT),
}
}
}
@@ -28,6 +34,7 @@ pub enum LogLevel {
Trace,
}
#[allow(clippy::from_over_into)]
impl Into<Option<Level>> for LogLevel {
fn into(self) -> Option<Level> {
match self {

@@ -1,5 +1,5 @@
use std::fs;
use std::path::PathBuf;
use std::path::{Path, PathBuf};
use config::{Config, FileFormat};
use serde::{Deserialize, Serialize};
@@ -24,18 +24,18 @@ pub struct Settings {
}
impl Settings {
pub fn read(root: &PathBuf) -> RepoResult<Self> {
let mut settings = Config::default();
settings
.merge(config::File::from_str(
pub fn read(root: &Path) -> RepoResult<Self> {
let settings = Config::builder()
.add_source(config::File::from_str(
&*Settings::default().to_toml_string()?,
FileFormat::Toml,
))?
.merge(config::File::from(root.join("repo")))?
.merge(config::Environment::with_prefix("MEDIAREPO").separator("."))?;
))
.add_source(config::File::from(root.join("repo")))
.add_source(config::Environment::with_prefix("MEDIAREPO").separator("."))
.build()?;
tracing::debug!("Settings are: {:#?}", settings);
Ok(settings.try_into::<Settings>()?)
Ok(settings.try_deserialize()?)
}
/// Parses settings from a string
@@ -44,22 +44,22 @@ impl Settings {
settings_main.server.tcp.enabled = true;
settings_main.server.tcp.port = PortSetting::Range(settings_v1.port_range);
settings_main.server.tcp.listen_address = settings_v1.listen_address;
settings_main.paths.thumbnail_directory = settings_v1.thumbnail_store.into();
settings_main.paths.thumbnail_directory = settings_v1.thumbnail_store;
settings_main.paths.database_directory = PathBuf::from(settings_v1.database_path)
.parent()
.map(|p| p.to_string_lossy().to_string())
.unwrap_or_else(|| String::from("./"));
let mut settings = Config::default();
settings
.merge(config::File::from_str(
let settings = Config::builder()
.add_source(config::File::from_str(
&*settings_main.to_toml_string()?,
FileFormat::Toml,
))?
.merge(config::Environment::with_prefix("MEDIAREPO"))?;
))
.add_source(config::Environment::with_prefix("MEDIAREPO"))
.build()?;
tracing::debug!("Settings are: {:#?}", settings);
Ok(settings.try_into::<Settings>()?)
Ok(settings.try_deserialize()?)
}
/// Converts the settings into a toml string
@@ -69,7 +69,7 @@ impl Settings {
Ok(string)
}
pub fn save(&self, root: &PathBuf) -> RepoResult<()> {
pub fn save(&self, root: &Path) -> RepoResult<()> {
let string = toml::to_string_pretty(&self)?;
fs::write(root.join("repo.toml"), string.into_bytes())?;

@@ -1,4 +1,4 @@
use std::path::PathBuf;
use std::path::{Path, PathBuf};
use serde::{Deserialize, Serialize};
@@ -21,27 +21,27 @@ impl PathSettings {
impl PathSettings {
#[inline]
pub fn database_dir(&self, root: &PathBuf) -> PathBuf {
pub fn database_dir(&self, root: &Path) -> PathBuf {
root.join(&self.database_directory)
}
#[inline]
pub fn files_dir(&self, root: &PathBuf) -> PathBuf {
pub fn files_dir(&self, root: &Path) -> PathBuf {
root.join(&self.files_directory)
}
#[inline]
pub fn thumbs_dir(&self, root: &PathBuf) -> PathBuf {
pub fn thumbs_dir(&self, root: &Path) -> PathBuf {
root.join(&self.thumbnail_directory)
}
#[inline]
pub fn db_file_path(&self, root: &PathBuf) -> PathBuf {
pub fn db_file_path(&self, root: &Path) -> PathBuf {
self.database_dir(root).join("repo.db")
}
#[inline]
pub fn frontend_state_file_path(&self, root: &PathBuf) -> PathBuf {
pub fn frontend_state_file_path(&self, root: &Path) -> PathBuf {
self.database_dir(root).join("frontend-state.json")
}
}

@@ -0,0 +1,124 @@
use std::slice::{Iter, IterMut};
use tracing::level_filters::LevelFilter;
use tracing::span::{Attributes, Record};
use tracing::subscriber::Interest;
use tracing::{Event, Id, Metadata, Subscriber};
use tracing_subscriber::Layer;
pub struct DynLayerList<S>(Vec<Box<dyn Layer<S> + Send + Sync + 'static>>);
impl<S> Default for DynLayerList<S> {
fn default() -> Self {
Self(Vec::new())
}
}
impl<S> DynLayerList<S> {
pub fn new() -> Self {
Self::default()
}
pub fn iter(&self) -> Iter<'_, Box<dyn Layer<S> + Send + Sync>> {
self.0.iter()
}
pub fn iter_mut(&mut self) -> IterMut<'_, Box<dyn Layer<S> + Send + Sync>> {
self.0.iter_mut()
}
}
impl<S> DynLayerList<S>
where
S: Subscriber,
{
pub fn add<L: Layer<S> + Send + Sync>(&mut self, layer: L) {
self.0.push(Box::new(layer));
}
}
impl<S> Layer<S> for DynLayerList<S>
where
S: Subscriber,
{
fn on_layer(&mut self, subscriber: &mut S) {
self.iter_mut().for_each(|l| l.on_layer(subscriber));
}
fn register_callsite(&self, metadata: &'static Metadata<'static>) -> Interest {
// Return highest level of interest.
let mut interest = Interest::never();
for layer in &self.0 {
let new_interest = layer.register_callsite(metadata);
if (interest.is_sometimes() && new_interest.is_always())
|| (interest.is_never() && !new_interest.is_never())
{
interest = new_interest;
}
}
interest
}
fn enabled(
&self,
metadata: &Metadata<'_>,
ctx: tracing_subscriber::layer::Context<'_, S>,
) -> bool {
self.iter().any(|l| l.enabled(metadata, ctx.clone()))
}
fn on_new_span(
&self,
attrs: &Attributes<'_>,
id: &Id,
ctx: tracing_subscriber::layer::Context<'_, S>,
) {
self.iter()
.for_each(|l| l.on_new_span(attrs, id, ctx.clone()));
}
fn max_level_hint(&self) -> Option<LevelFilter> {
self.iter().filter_map(|l| l.max_level_hint()).max()
}
fn on_record(
&self,
span: &Id,
values: &Record<'_>,
ctx: tracing_subscriber::layer::Context<'_, S>,
) {
self.iter()
.for_each(|l| l.on_record(span, values, ctx.clone()));
}
fn on_follows_from(
&self,
span: &Id,
follows: &Id,
ctx: tracing_subscriber::layer::Context<'_, S>,
) {
self.iter()
.for_each(|l| l.on_follows_from(span, follows, ctx.clone()));
}
fn on_event(&self, event: &Event<'_>, ctx: tracing_subscriber::layer::Context<'_, S>) {
self.iter().for_each(|l| l.on_event(event, ctx.clone()));
}
fn on_enter(&self, id: &Id, ctx: tracing_subscriber::layer::Context<'_, S>) {
self.iter().for_each(|l| l.on_enter(id, ctx.clone()));
}
fn on_exit(&self, id: &Id, ctx: tracing_subscriber::layer::Context<'_, S>) {
self.iter().for_each(|l| l.on_exit(id, ctx.clone()));
}
fn on_close(&self, id: Id, ctx: tracing_subscriber::layer::Context<'_, S>) {
self.iter()
.for_each(|l| l.on_close(id.clone(), ctx.clone()));
}
fn on_id_change(&self, old: &Id, new: &Id, ctx: tracing_subscriber::layer::Context<'_, S>) {
self.iter()
.for_each(|l| l.on_id_change(old, new, ctx.clone()));
}
}
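A minimal wiring sketch, assuming the `DynLayerList` above and `tracing-subscriber` 0.3; the `telemetry` flag and the plain `fmt` layer are illustrative, not the daemon's actual logging setup:

```rust
use tracing_subscriber::{fmt, layer::SubscriberExt, Registry};

// Layers are picked at runtime and collected into the DynLayerList,
// which then acts as a single Layer on top of the Registry.
fn init_logging(telemetry: bool) {
    let mut layers = DynLayerList::new();
    layers.add(fmt::layer());
    if telemetry {
        // an opentelemetry/jaeger layer would be added here conditionally
    }
    let subscriber = Registry::default().with(layers);
    tracing::subscriber::set_global_default(subscriber)
        .expect("failed to set global subscriber");
}
```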

@@ -1,7 +0,0 @@
use async_trait::async_trait;
#[async_trait]
pub trait AsyncTryFrom<T> {
type Error;
fn async_try_from(other: T) -> Result<Self, Self::Error>;
}

@@ -3,7 +3,7 @@ use std::path::PathBuf;
use mediarepo_api::types::repo::SizeType;
use tokio_graceful_shutdown::SubsystemHandle;
use typemap_rev::TypeMapKey;
use trait_bound_typemap::TypeMapKey;
use crate::settings::Settings;

@@ -1,4 +1,4 @@
use std::path::PathBuf;
use std::path::{Path, PathBuf};
use futures::future;
use tokio::fs::{self, OpenOptions};
@@ -16,7 +16,7 @@ pub fn parse_namespace_and_tag(norm_tag: String) -> (Option<String>, String) {
}
/// Parses all tags from a file
pub async fn parse_tags_file(path: PathBuf) -> RepoResult<Vec<(Option<String>, String)>> {
pub async fn parse_tags_file(path: &Path) -> RepoResult<Vec<(Option<String>, String)>> {
let file = OpenOptions::new().read(true).open(path).await?;
let mut lines = BufReader::new(file).lines();
let mut tags = Vec::new();
@@ -47,7 +47,7 @@ pub async fn get_folder_size(path: PathBuf) -> RepoResult<u64> {
}
}
}
let futures = all_files.into_iter().map(|f| read_file_size(f));
let futures = all_files.into_iter().map(read_file_size);
let results = future::join_all(futures).await;
let size = results.into_iter().filter_map(|r| r.ok()).sum();

@@ -8,16 +8,16 @@ workspace = ".."
[dependencies]
chrono = "0.4.19"
tracing = "0.1.30"
tracing = "0.1.33"
[dependencies.mediarepo-core]
path = "../mediarepo-core"
[dependencies.sqlx]
version = "0.5.10"
version = "0.5.11"
features = ["migrate"]
[dependencies.sea-orm]
version = "0.6.0"
features = ["sqlx-sqlite", "runtime-tokio-native-tls", "macros", "debug-print"]
version = "0.7.1"
features = ["sqlx-sqlite", "runtime-tokio-native-tls", "macros"]
default-features = false

@@ -0,0 +1,5 @@
CREATE TABLE job_states (
job_type INTEGER NOT NULL,
value BLOB,
PRIMARY KEY (job_type)
);

@@ -0,0 +1,45 @@
use sea_orm::prelude::*;
use sea_orm::TryFromU64;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "job_states")]
pub struct Model {
#[sea_orm(primary_key)]
pub job_type: JobType,
pub value: Vec<u8>,
}
#[derive(Clone, Copy, Debug, PartialEq, EnumIter, DeriveActiveEnum)]
#[sea_orm(rs_type = "u32", db_type = "Integer")]
pub enum JobType {
#[sea_orm(num_value = 10)]
MigrateCDs,
#[sea_orm(num_value = 20)]
CalculateSizes,
#[sea_orm(num_value = 30)]
GenerateThumbs,
#[sea_orm(num_value = 40)]
CheckIntegrity,
#[sea_orm(num_value = 50)]
Vacuum,
}
impl TryFromU64 for JobType {
fn try_from_u64(n: u64) -> Result<Self, DbErr> {
let value = match n {
10 => Self::MigrateCDs,
20 => Self::CalculateSizes,
30 => Self::GenerateThumbs,
40 => Self::CheckIntegrity,
50 => Self::Vacuum,
_ => return Err(DbErr::Custom(String::from("Invalid job type"))),
};
Ok(value)
}
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}
impl ActiveModelBehavior for ActiveModel {}

@@ -3,6 +3,7 @@ pub mod content_descriptor_source;
pub mod content_descriptor_tag;
pub mod file;
pub mod file_metadata;
pub mod job_state;
pub mod namespace;
pub mod sort_key;
pub mod sorting_preset;

@@ -13,7 +13,8 @@ pub async fn get_database<S: AsRef<str>>(uri: S) -> RepoDatabaseResult<DatabaseC
migrate(uri.as_ref()).await?;
let mut opt = ConnectOptions::new(uri.as_ref().to_string());
opt.connect_timeout(Duration::from_secs(10))
.idle_timeout(Duration::from_secs(10));
.idle_timeout(Duration::from_secs(10))
.sqlx_logging(false);
let conn = Database::connect(opt).await?;

@@ -30,7 +30,7 @@ pub async fn get_all_counts(db: &DatabaseConnection) -> RepoResult<Counts> {
))
.one(db)
.await?
.ok_or(RepoError::from("could not retrieve metadata from database"))?;
.ok_or_else(|| RepoError::from("could not retrieve metadata from database"))?;
Ok(counts)
}

@@ -55,7 +55,7 @@ fn vec_to_query_list<D: Display>(input: Vec<D>) -> String {
let mut entries = input
.into_iter()
.fold(String::new(), |acc, val| format!("{}{},", acc, val));
if entries.len() > 0 {
if !entries.is_empty() {
entries.remove(entries.len() - 1);
}

@@ -8,12 +8,11 @@ workspace = ".."
[dependencies]
chrono = "0.4.19"
typemap_rev = "0.1.5"
serde = "1.0.136"
mime_guess = "2.0.3"
mime_guess = "2.0.4"
mime = "0.3.16"
tracing = "0.1.30"
async-trait = "0.1.52"
tracing = "0.1.33"
async-trait = "0.1.53"
[dependencies.mediarepo-core]
path = "../mediarepo-core"
@@ -22,14 +21,11 @@ path = "../mediarepo-database"
path = "../mediarepo-database"
[dependencies.sea-orm]
version = "0.6.0"
version = "0.7.1"
features = ["runtime-tokio-native-tls", "macros"]
default-features = false
[dependencies.tokio]
version = "1.16.1"
version = "1.21.2"
features = ["fs", "io-std", "io-util"]
[features]
ffmpeg = ["mediarepo-core/ffmpeg"]

@@ -93,7 +93,7 @@ impl FileDao {
.all(&self.ctx.db)
.await?
.into_iter()
.map(|m| FileMetadataDto::new(m))
.map(FileMetadataDto::new)
.collect();
Ok(metadata)

@@ -22,8 +22,8 @@ impl FileDao {
let trx = self.ctx.db.begin().await?;
let model = file::ActiveModel {
id: Set(update_dto.id),
cd_id: update_dto.cd_id.map(|v| Set(v)).unwrap_or(NotSet),
mime_type: update_dto.mime_type.map(|v| Set(v)).unwrap_or(NotSet),
cd_id: update_dto.cd_id.map(Set).unwrap_or(NotSet),
mime_type: update_dto.mime_type.map(Set).unwrap_or(NotSet),
status: update_dto.status.map(|v| Set(v as i32)).unwrap_or(NotSet),
};
let file_model = model.update(&trx).await?;
@@ -62,8 +62,8 @@ impl FileDao {
sizes: I,
) -> RepoResult<Vec<ThumbnailDto>> {
let bytes = self.get_bytes(file.cd()).await?;
let mime_type = mime::Mime::from_str(file.mime_type())
.unwrap_or_else(|_| mime::APPLICATION_OCTET_STREAM);
let mime_type =
mime::Mime::from_str(file.mime_type()).unwrap_or(mime::APPLICATION_OCTET_STREAM);
let thumbnails =
thumbnailer::create_thumbnails(Cursor::new(bytes), mime_type.clone(), sizes)?;
let mut dtos = Vec::new();

@@ -3,5 +3,6 @@ use crate::dao_provider;
pub mod generate_missing_thumbnails;
pub mod migrate_content_descriptors;
pub mod sqlite_operations;
pub mod state;
dao_provider!(JobDao);

@@ -0,0 +1,58 @@
use crate::dao::job::JobDao;
use crate::dto::{JobStateDto, UpsertJobStateDto};
use mediarepo_core::error::RepoResult;
use mediarepo_database::entities::job_state;
use mediarepo_database::entities::job_state::JobType;
use sea_orm::prelude::*;
use sea_orm::ActiveValue::Set;
use sea_orm::{Condition, TransactionTrait};
impl JobDao {
/// Returns the stored state for a given job type
pub async fn state_for_job_type(&self, job_type: JobType) -> RepoResult<Option<JobStateDto>> {
let state = job_state::Entity::find()
.filter(job_state::Column::JobType.eq(job_type))
.one(&self.ctx.db)
.await?
.map(JobStateDto::new);
Ok(state)
}
pub async fn upsert_state(&self, state: UpsertJobStateDto) -> RepoResult<()> {
self.upsert_multiple_states(vec![state]).await
}
pub async fn upsert_multiple_states(&self, states: Vec<UpsertJobStateDto>) -> RepoResult<()> {
let trx = self.ctx.db.begin().await?;
job_state::Entity::delete_many()
.filter(build_state_filters(&states))
.exec(&trx)
.await?;
job_state::Entity::insert_many(build_active_state_models(states))
.exec(&trx)
.await?;
trx.commit().await?;
Ok(())
}
}
fn build_state_filters(states: &[UpsertJobStateDto]) -> Condition {
states
.iter()
.map(|s| Condition::all().add(job_state::Column::JobType.eq(s.job_type)))
.fold(Condition::any(), |acc, cond| acc.add(cond))
}
fn build_active_state_models(states: Vec<UpsertJobStateDto>) -> Vec<job_state::ActiveModel> {
states
.into_iter()
.map(|s| job_state::ActiveModel {
job_type: Set(s.job_type),
value: Set(s.value),
})
.collect()
}
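A hypothetical call site for the DAO above, assuming job progress lives in a `VacuumProgress` type (hypothetical) that implements `Serialize` and is encoded with the re-exported `bincode`:

```rust
use mediarepo_core::bincode;

// Sketch: persist serialized progress for the vacuum job via the upsert above.
async fn save_vacuum_state(job_dao: &JobDao, progress: &VacuumProgress) -> RepoResult<()> {
    let state_dto = UpsertJobStateDto {
        job_type: JobType::Vacuum,
        value: bincode::serialize(progress)?, // RepoError has a From<bincode::Error>
    };
    job_dao.upsert_state(state_dto).await
}
```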

@@ -122,7 +122,7 @@ async fn add_keys(
async fn find_sort_keys(
trx: &DatabaseTransaction,
keys: &Vec<AddSortKeyDto>,
keys: &[AddSortKeyDto],
) -> RepoResult<Vec<SortKeyDto>> {
if keys.is_empty() {
return Ok(vec![]);

@@ -77,14 +77,12 @@ fn create_cd_tag_map(
)>,
tag_id_map: HashMap<i64, TagDto>,
) -> HashMap<Vec<u8>, Vec<TagDto>> {
let cd_tag_map = tag_cd_entries
tag_cd_entries
.into_iter()
.filter_map(|(t, cd)| Some((cd?, tag_id_map.get(&t.tag_id)?.clone())))
.sorted_by_key(|(cd, _)| cd.id)
.group_by(|(cd, _)| cd.descriptor.to_owned())
.into_iter()
.map(|(key, group)| (key, group.map(|(_, t)| t).collect::<Vec<TagDto>>()))
.collect();
cd_tag_map
.collect::<HashMap<Vec<u8>, Vec<TagDto>>>()
}

@@ -45,11 +45,12 @@ fn name_query_to_condition(query: TagByNameQuery) -> Option<Condition> {
let TagByNameQuery { namespace, name } = query;
let mut condition = Condition::all();
#[allow(clippy::question_mark)]
if !name.ends_with('*') {
condition = condition.add(tag::Column::Name.eq(name))
} else if name.len() > 1 {
condition =
condition.add(tag::Column::Name.like(&*format!("{}%", name.trim_end_matches("*"))))
condition.add(tag::Column::Name.like(&*format!("{}%", name.trim_end_matches('*'))))
} else if namespace.is_none() {
return None;
}

@@ -1,9 +1,10 @@
use sea_orm::prelude::*;
use sea_orm::sea_query::Query;
use sea_orm::ActiveValue::Set;
use sea_orm::{DatabaseTransaction, TransactionTrait};
use mediarepo_core::error::RepoResult;
use mediarepo_database::entities::content_descriptor_tag;
use mediarepo_database::entities::{content_descriptor_tag, namespace, tag};
use crate::dao::tag::TagDao;
@@ -41,11 +42,15 @@ impl TagDao {
#[tracing::instrument(level = "debug", skip(self))]
pub async fn remove_mappings(&self, cd_ids: Vec<i64>, tag_ids: Vec<i64>) -> RepoResult<()> {
let trx = self.ctx.db.begin().await?;
content_descriptor_tag::Entity::delete_many()
.filter(content_descriptor_tag::Column::CdId.is_in(cd_ids))
.filter(content_descriptor_tag::Column::TagId.is_in(tag_ids))
.exec(&self.ctx.db)
.exec(&trx)
.await?;
delete_orphans(&trx).await?;
trx.commit().await?;
Ok(())
}
@@ -53,12 +58,12 @@ impl TagDao {
async fn get_existing_mappings(
trx: &DatabaseTransaction,
cd_ids: &Vec<i64>,
tag_ids: &Vec<i64>,
cd_ids: &[i64],
tag_ids: &[i64],
) -> RepoResult<Vec<(i64, i64)>> {
let existing_mappings: Vec<(i64, i64)> = content_descriptor_tag::Entity::find()
.filter(content_descriptor_tag::Column::CdId.is_in(cd_ids.clone()))
.filter(content_descriptor_tag::Column::TagId.is_in(tag_ids.clone()))
.filter(content_descriptor_tag::Column::CdId.is_in(cd_ids.to_vec()))
.filter(content_descriptor_tag::Column::TagId.is_in(tag_ids.to_vec()))
.all(trx)
.await?
.into_iter()
@@ -66,3 +71,34 @@ async fn get_existing_mappings(
.collect();
Ok(existing_mappings)
}
/// Deletes orphaned tag entries and namespaces from the database
async fn delete_orphans(trx: &DatabaseTransaction) -> RepoResult<()> {
tag::Entity::delete_many()
.filter(
tag::Column::Id.not_in_subquery(
Query::select()
.column(content_descriptor_tag::Column::TagId)
.from(content_descriptor_tag::Entity)
.group_by_col(content_descriptor_tag::Column::TagId)
.to_owned(),
),
)
.exec(trx)
.await?;
namespace::Entity::delete_many()
.filter(
namespace::Column::Id.not_in_subquery(
Query::select()
.column(tag::Column::NamespaceId)
.from(tag::Entity)
.group_by_col(tag::Column::NamespaceId)
.to_owned(),
),
)
.exec(trx)
.await?;
Ok(())
}
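In effect, removing the last mapping for a tag now also drops the orphaned tag and namespace rows in the same transaction; an illustrative call (identifiers are placeholders):

```rust
// Orphan cleanup happens inside remove_mappings since this change.
tag_dao.remove_mappings(vec![cd_id], vec![tag_id]).await?;
```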

@@ -75,7 +75,7 @@ pub struct AddFileDto {
pub name: Option<String>,
}
#[derive(Clone, Debug)]
#[derive(Clone, Debug, Default)]
pub struct UpdateFileDto {
pub id: i64,
pub cd_id: Option<i64>,
@@ -83,17 +83,6 @@ pub struct UpdateFileDto {
pub status: Option<FileStatus>,
}
impl Default for UpdateFileDto {
fn default() -> Self {
Self {
id: 0,
cd_id: None,
mime_type: None,
status: None,
}
}
}
#[derive(Copy, Clone, Debug)]
pub enum FileStatus {
Imported = 10,

@@ -0,0 +1,31 @@
use mediarepo_database::entities::job_state;
use mediarepo_database::entities::job_state::JobType;
#[derive(Clone, Debug)]
pub struct JobStateDto {
model: job_state::Model,
}
impl JobStateDto {
pub(crate) fn new(model: job_state::Model) -> Self {
Self { model }
}
pub fn job_type(&self) -> JobType {
self.model.job_type
}
pub fn value(&self) -> &[u8] {
&self.model.value
}
pub fn into_value(self) -> Vec<u8> {
self.model.value
}
}
#[derive(Clone, Debug)]
pub struct UpsertJobStateDto {
pub job_type: JobType,
pub value: Vec<u8>,
}

@@ -1,5 +1,6 @@
pub use file::*;
pub use file_metadata::*;
pub use job_state::*;
pub use namespace::*;
pub use sorting_preset::*;
pub use tag::*;
@@ -7,6 +8,7 @@ pub use thumbnail::*;
mod file;
mod file_metadata;
mod job_state;
mod namespace;
mod sorting_preset;
mod tag;

@@ -84,7 +84,7 @@ impl KeyType {
}
pub fn to_number(&self) -> i32 {
self.clone() as i32
*self as i32
}
}

@@ -1,7 +1,6 @@
use mediarepo_core::trait_bound_typemap::TypeMapKey;
use std::sync::Arc;
use typemap_rev::TypeMapKey;
use crate::dao::repo::Repo;
pub struct RepoKey;

@@ -8,10 +8,10 @@ workspace = ".."
[dependencies]
serde = "1.0.136"
tracing = "0.1.30"
tracing = "0.1.33"
compare = "0.1.0"
port_check = "0.1.5"
rayon = "1.5.1"
rayon = "1.5.2"
[dependencies.mediarepo-core]
path = "../mediarepo-core"
@@ -22,9 +22,12 @@ path = "../mediarepo-database"
[dependencies.mediarepo-logic]
path = "../mediarepo-logic"
[dependencies.mediarepo-worker]
path = "../mediarepo-worker"
[dependencies.tokio]
version = "1.16.1"
features = ["net"]
version = "1.21.2"
features = ["net", "rt", "tracing"]
[dependencies.chrono]
version = "0.4.19"

@@ -1,29 +1,26 @@
use std::net::SocketAddr;
use std::path::PathBuf;
use std::sync::Arc;
use tokio::net::TcpListener;
use tokio::task::JoinHandle;
use crate::encrypted::EncryptedListener;
use mediarepo_core::bromine::prelude::*;
use mediarepo_core::error::{RepoError, RepoResult};
use mediarepo_core::mediarepo_api::types::misc::InfoResponse;
use mediarepo_core::settings::{PortSetting, Settings};
use mediarepo_core::tokio_graceful_shutdown::SubsystemHandle;
use mediarepo_core::type_keys::{RepoPathKey, SettingsKey, SizeMetadataKey, SubsystemKey};
use mediarepo_logic::dao::repo::Repo;
use mediarepo_logic::type_keys::RepoKey;
use mediarepo_core::trait_bound_typemap::{SendSyncTypeMap, TypeMap};
use mediarepo_core::type_keys::{SizeMetadataKey, SubsystemKey};
mod from_model;
mod namespaces;
mod utils;
#[tracing::instrument(skip(subsystem, settings, repo))]
#[tracing::instrument(skip_all)]
pub fn start_tcp_server(
subsystem: SubsystemHandle,
repo_path: PathBuf,
settings: Settings,
repo: Repo,
shared_data: SendSyncTypeMap,
) -> RepoResult<(String, JoinHandle<()>)> {
let port = match &settings.server.tcp.port {
PortSetting::Fixed(p) => {
@@ -33,38 +30,33 @@ pub fn start_tcp_server(
return Err(RepoError::PortUnavailable);
}
}
PortSetting::Range((l, r)) => port_check::free_local_port_in_range(*l, *r)
.ok_or_else(|| RepoError::PortUnavailable)?,
PortSetting::Range((l, r)) => {
port_check::free_local_port_in_range(*l, *r).ok_or(RepoError::PortUnavailable)?
}
};
let ip = settings.server.tcp.listen_address.to_owned();
let address = SocketAddr::new(ip, port);
let address_string = address.to_string();
let join_handle = tokio::task::Builder::new()
.name("mediarepo_tcp::listen")
.spawn(async move {
get_builder::<TcpListener>(address)
.insert::<SubsystemKey>(subsystem)
.insert::<RepoKey>(Arc::new(repo))
.insert::<SettingsKey>(settings)
.insert::<RepoPathKey>(repo_path)
.insert::<SizeMetadataKey>(Default::default())
.build_server()
.await
.expect("Failed to start tcp server")
});
let join_handle = tokio::task::spawn(async move {
get_builder::<EncryptedListener<TcpListener>>(address)
.insert::<SubsystemKey>(subsystem)
.insert_all(shared_data)
.insert::<SizeMetadataKey>(Default::default())
.build_server()
.await
.expect("Failed to start tcp server")
});
Ok((address_string, join_handle))
}
#[cfg(unix)]
#[tracing::instrument(skip(subsystem, settings, repo))]
#[tracing::instrument(skip_all)]
pub fn create_unix_socket(
subsystem: SubsystemHandle,
path: std::path::PathBuf,
repo_path: PathBuf,
settings: Settings,
repo: Repo,
shared_data: SendSyncTypeMap,
) -> RepoResult<JoinHandle<()>> {
use std::fs;
use tokio::net::UnixListener;
@ -72,19 +64,15 @@ pub fn create_unix_socket(
if path.exists() {
fs::remove_file(&path)?;
}
let join_handle = tokio::task::Builder::new()
.name("mediarepo_unix_socket::listen")
.spawn(async move {
get_builder::<UnixListener>(path)
.insert::<SubsystemKey>(subsystem)
.insert::<RepoKey>(Arc::new(repo))
.insert::<SettingsKey>(settings)
.insert::<RepoPathKey>(repo_path)
.insert::<SizeMetadataKey>(Default::default())
.build_server()
.await
.expect("Failed to create unix domain socket");
});
let join_handle = tokio::task::spawn(async move {
get_builder::<UnixListener>(path)
.insert::<SubsystemKey>(subsystem)
.insert_all(shared_data)
.insert::<SizeMetadataKey>(Default::default())
.build_server()
.await
.expect("Failed to create unix domain socket");
});
Ok(join_handle)
}
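
The listener setup no longer threads `Repo`, `Settings`, and the repo path through individually; callers assemble one `SendSyncTypeMap` and the server inserts it wholesale via `insert_all`. A sketch of the caller side, mirroring the `main.rs` changes further down in this diff:

```rust
use std::iter::FromIterator;
use std::path::PathBuf;
use std::sync::Arc;

use mediarepo_core::settings::Settings;
use mediarepo_core::trait_bound_typemap::{CloneSendSyncTypeMap, SendSyncTypeMap, TypeMap};
use mediarepo_core::type_keys::{RepoPathKey, SettingsKey};
use mediarepo_logic::dao::repo::Repo;
use mediarepo_logic::type_keys::RepoKey;

// Assemble the shared data once and hand it to every listener.
fn build_shared_data(repo: Repo, settings: Settings, repo_path: PathBuf) -> SendSyncTypeMap {
    let mut shared = CloneSendSyncTypeMap::new();
    shared.insert::<RepoKey>(Arc::new(repo));
    shared.insert::<SettingsKey>(settings);
    shared.insert::<RepoPathKey>(repo_path);
    SendSyncTypeMap::from_iter(shared)
}
```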

@ -151,7 +151,7 @@ impl FilesNamespace {
content: bytes,
mime_type: metadata
.mime_type
.unwrap_or(String::from("application/octet-stream")),
.unwrap_or_else(|| String::from("application/octet-stream")),
creation_time: metadata.creation_time,
change_time: metadata.change_time,
name: Some(metadata.name),

@ -69,10 +69,10 @@ fn build_filters_from_expressions(
}
}
};
if filters.len() > 0 {
Some(filters)
} else {
if filters.is_empty() {
None
} else {
Some(filters)
}
})
.collect()
@ -92,7 +92,7 @@ fn map_tag_query_to_filter(
query: TagQuery,
tag_id_map: &HashMap<String, i64>,
) -> Option<FilterProperty> {
if query.tag.ends_with("*") {
if query.tag.ends_with('*') {
map_wildcard_tag_to_filter(query, tag_id_map)
} else {
map_tag_to_filter(query, tag_id_map)
@ -103,7 +103,7 @@ fn map_wildcard_tag_to_filter(
query: TagQuery,
tag_id_map: &HashMap<String, i64>,
) -> Option<FilterProperty> {
let filter_tag = query.tag.trim_end_matches("*");
let filter_tag = query.tag.trim_end_matches('*');
let relevant_ids = tag_id_map
.iter()
.filter_map(|(name, id)| {
@ -115,15 +115,15 @@ fn map_wildcard_tag_to_filter(
})
.collect::<Vec<i64>>();
if relevant_ids.len() > 0 {
if relevant_ids.is_empty() {
None
} else {
let comparator = if query.negate {
IsNot(relevant_ids)
} else {
Is(relevant_ids)
};
Some(FilterProperty::TagWildcardIds(comparator))
} else {
None
}
}
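
These hunks all follow the same clippy-driven pattern: `len() > 0` becomes `is_empty()` with the branches flipped, and one-character `&str` patterns become `char` patterns, which skip the substring-matching machinery. A condensed illustration:

```rust
fn has_wildcard(tags: &[String], query: &str) -> bool {
    // clippy::len_zero — prefer is_empty() over `len() > 0`
    if tags.is_empty() {
        return false;
    }
    // clippy::single_char_pattern — a char pattern avoids substring matching
    query.ends_with('*')
}
```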

@ -71,7 +71,7 @@ async fn build_sort_context(
mime_type: file.mime_type().to_owned(),
namespaces: cid_nsp
.remove(&file.cd_id())
.unwrap_or(HashMap::with_capacity(0)),
.unwrap_or_else(|| HashMap::with_capacity(0)),
tag_count: cid_tag_counts.remove(&file.cd_id()).unwrap_or(0),
import_time: metadata.import_time().to_owned(),
create_time: metadata.import_time().to_owned(),
@ -176,11 +176,8 @@ fn adjust_for_dir(ordering: Ordering, direction: &SortDirection) -> Ordering {
}
}
fn compare_tag_lists(list_a: &Vec<String>, list_b: &Vec<String>) -> Ordering {
let first_diff = list_a
.into_iter()
.zip(list_b.into_iter())
.find(|(a, b)| *a != *b);
fn compare_tag_lists(list_a: &[String], list_b: &[String]) -> Ordering {
let first_diff = list_a.iter().zip(list_b.iter()).find(|(a, b)| *a != *b);
if let Some(diff) = first_diff {
if let (Some(num_a), Some(num_b)) = (diff.0.parse::<f32>().ok(), diff.1.parse::<f32>().ok())
{

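`compare_tag_lists` now takes `&[String]` instead of `&Vec<String>`: a slice parameter accepts more argument types and avoids clippy's `ptr_arg` lint. A self-contained sketch of the same signature change:

```rust
use std::cmp::Ordering;

fn compare_first(list_a: &[String], list_b: &[String]) -> Ordering {
    // same zip-and-find shape as compare_tag_lists above
    list_a
        .iter()
        .zip(list_b.iter())
        .find(|(a, b)| a != b)
        .map(|(a, b)| a.cmp(b))
        .unwrap_or(Ordering::Equal)
}

fn main() {
    let a = vec![String::from("artist")];
    let b = [String::from("artist")];
    // both a Vec reference and a plain slice satisfy &[String]
    assert_eq!(compare_first(&a, &b), Ordering::Equal);
}
```
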
@ -1,11 +1,15 @@
use crate::TypeMap;
use mediarepo_core::bromine::prelude::*;
use mediarepo_core::error::RepoResult;
use mediarepo_core::mediarepo_api::types::jobs::{JobType, RunJobRequest};
use mediarepo_core::mediarepo_api::types::repo::SizeType;
use mediarepo_core::type_keys::SizeMetadataKey;
use mediarepo_logic::dao::DaoProvider;
use mediarepo_core::type_keys::{RepoPathKey, SettingsKey, SizeMetadataKey};
use mediarepo_worker::handle::JobState;
use mediarepo_worker::job_dispatcher::JobDispatcher;
use mediarepo_worker::jobs::{
CalculateSizesJob, CheckIntegrityJob, GenerateMissingThumbsJob, Job, MigrateCDsJob, VacuumJob,
};
use crate::utils::{calculate_size, get_repo_from_context};
use crate::utils::get_job_dispatcher_from_context;
pub struct JobsNamespace;
@ -16,8 +20,9 @@ impl NamespaceProvider for JobsNamespace {
fn register(handler: &mut EventHandler) {
events!(handler,
"run_job" => Self::run_job
)
"run_job" => Self::run_job,
"is_job_running" => Self::is_job_running
);
}
}
@ -25,7 +30,7 @@ impl JobsNamespace {
#[tracing::instrument(skip_all)]
pub async fn run_job(ctx: &Context, event: Event) -> IPCResult<Response> {
let run_request = event.payload::<RunJobRequest>()?;
let job_dao = get_repo_from_context(ctx).await.job();
let dispatcher = get_job_dispatcher_from_context(ctx).await;
if !run_request.sync {
// early response to indicate that the job will be run
@ -33,26 +38,89 @@ impl JobsNamespace {
}
match run_request.job_type {
JobType::MigrateContentDescriptors => job_dao.migrate_content_descriptors().await?,
JobType::MigrateContentDescriptors => {
dispatch_job(&dispatcher, MigrateCDsJob::default(), run_request.sync).await?
}
JobType::CalculateSizes => calculate_all_sizes(ctx).await?,
JobType::CheckIntegrity => job_dao.check_integrity().await?,
JobType::Vacuum => job_dao.vacuum().await?,
JobType::GenerateThumbnails => job_dao.generate_missing_thumbnails().await?,
JobType::CheckIntegrity => {
dispatch_job(&dispatcher, CheckIntegrityJob::default(), run_request.sync).await?
}
JobType::Vacuum => {
dispatch_job(&dispatcher, VacuumJob::default(), run_request.sync).await?
}
JobType::GenerateThumbnails => {
dispatch_job(
&dispatcher,
GenerateMissingThumbsJob::default(),
run_request.sync,
)
.await?
}
}
Ok(Response::empty())
}
#[tracing::instrument(skip_all)]
pub async fn is_job_running(ctx: &Context, event: Event) -> IPCResult<Response> {
let job_type = event.payload::<JobType>()?;
let dispatcher = get_job_dispatcher_from_context(ctx).await;
let running = match job_type {
JobType::MigrateContentDescriptors => {
is_job_running::<MigrateCDsJob>(&dispatcher).await
}
JobType::CalculateSizes => is_job_running::<CalculateSizesJob>(&dispatcher).await,
JobType::GenerateThumbnails => {
is_job_running::<GenerateMissingThumbsJob>(&dispatcher).await
}
JobType::CheckIntegrity => is_job_running::<CheckIntegrityJob>(&dispatcher).await,
JobType::Vacuum => is_job_running::<VacuumJob>(&dispatcher).await,
};
Response::payload(ctx, running)
}
}
async fn dispatch_job<J: 'static + Job>(
dispatcher: &JobDispatcher,
job: J,
sync: bool,
) -> RepoResult<()> {
let mut handle = if let Some(handle) = dispatcher.get_handle::<J>().await {
if handle.state().await == JobState::Running {
handle
} else {
dispatcher.dispatch(job).await
}
} else {
dispatcher.dispatch(job).await
};
if sync {
if let Some(result) = handle.take_result().await {
result?;
}
}
Ok(())
}
async fn calculate_all_sizes(ctx: &Context) -> RepoResult<()> {
let size_types = vec![
SizeType::Total,
SizeType::FileFolder,
SizeType::ThumbFolder,
SizeType::DatabaseFile,
];
for size_type in size_types {
let size = calculate_size(&size_type, ctx).await?;
let (repo_path, settings) = {
let data = ctx.data.read().await;
(
data.get::<RepoPathKey>().unwrap().clone(),
data.get::<SettingsKey>().unwrap().clone(),
)
};
let job = CalculateSizesJob::new(repo_path, settings);
let dispatcher = get_job_dispatcher_from_context(ctx).await;
let handle = dispatcher.dispatch(job).await;
let mut rx = {
let status = handle.status().read().await;
status.sizes_channel.subscribe()
};
while let Ok((size_type, size)) = rx.recv().await {
let mut data = ctx.data.write().await;
let size_map = data.get_mut::<SizeMetadataKey>().unwrap();
size_map.insert(size_type, size);
@ -60,3 +128,12 @@ async fn calculate_all_sizes(ctx: &Context) -> RepoResult<()> {
Ok(())
}
async fn is_job_running<T: 'static + Job>(dispatcher: &JobDispatcher) -> bool {
if let Some(handle) = dispatcher.get_handle::<T>().await {
let state = handle.state().await;
state == JobState::Running
} else {
false
}
}
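
`run_job` now routes every job type through `dispatch_job`, which reuses the handle of an already-running job rather than stacking a second instance, and only awaits the broadcast result when the request is synchronous. A hypothetical helper in the same module, illustrating the calling convention:

```rust
use mediarepo_core::error::RepoResult;
use mediarepo_worker::job_dispatcher::JobDispatcher;
use mediarepo_worker::jobs::VacuumJob;

// Dispatch (or attach to) a vacuum run and block until its result arrives —
// the same path a run_job request with `sync: true` takes.
async fn ensure_vacuumed(dispatcher: &JobDispatcher) -> RepoResult<()> {
    dispatch_job(dispatcher, VacuumJob::default(), true).await
}
```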

@ -2,13 +2,14 @@ use std::path::PathBuf;
use tokio::fs;
use crate::TypeMap;
use mediarepo_core::bromine::prelude::*;
use mediarepo_core::mediarepo_api::types::repo::{
FrontendState, RepositoryMetadata, SizeMetadata, SizeType,
};
use mediarepo_core::type_keys::{RepoPathKey, SettingsKey, SizeMetadataKey};
use crate::utils::{calculate_size, get_repo_from_context};
use crate::utils::get_repo_from_context;
pub struct RepoNamespace;
@ -56,7 +57,7 @@ impl RepoNamespace {
let size = if let Some(size) = size_cache.get(&size_type) {
*size
} else {
calculate_size(&size_type, ctx).await?
0
};
ctx.response(SizeMetadata { size, size_type })
@ -94,7 +95,7 @@ async fn get_frontend_state_path(ctx: &Context) -> IPCResult<PathBuf> {
let data = ctx.data.read().await;
let settings = data.get::<SettingsKey>().unwrap();
let repo_path = data.get::<RepoPathKey>().unwrap();
let state_path = settings.paths.frontend_state_file_path(&repo_path);
let state_path = settings.paths.frontend_state_file_path(repo_path);
Ok(state_path)
}

@ -1,18 +1,15 @@
use std::sync::Arc;
use tokio::fs;
use crate::TypeMap;
use mediarepo_core::bromine::ipc::context::Context;
use mediarepo_core::content_descriptor::decode_content_descriptor;
use mediarepo_core::error::{RepoError, RepoResult};
use mediarepo_core::mediarepo_api::types::identifier::FileIdentifier;
use mediarepo_core::mediarepo_api::types::repo::SizeType;
use mediarepo_core::type_keys::{RepoPathKey, SettingsKey};
use mediarepo_core::utils::get_folder_size;
use mediarepo_logic::dao::DaoProvider;
use mediarepo_logic::dao::repo::Repo;
use mediarepo_logic::dao::DaoProvider;
use mediarepo_logic::dto::FileDto;
use mediarepo_logic::type_keys::RepoKey;
use mediarepo_worker::job_dispatcher::{DispatcherKey, JobDispatcher};
pub async fn get_repo_from_context(ctx: &Context) -> Arc<Repo> {
let data = ctx.data.read().await;
@ -20,6 +17,11 @@ pub async fn get_repo_from_context(ctx: &Context) -> Arc<Repo> {
Arc::clone(repo)
}
pub async fn get_job_dispatcher_from_context(ctx: &Context) -> JobDispatcher {
let data = ctx.data.read().await;
data.get::<DispatcherKey>().unwrap().clone()
}
pub async fn file_by_identifier(identifier: FileIdentifier, repo: &Repo) -> RepoResult<FileDto> {
let file = match identifier {
FileIdentifier::ID(id) => repo.file().by_id(id).await,
@ -31,37 +33,9 @@ pub async fn file_by_identifier(identifier: FileIdentifier, repo: &Repo) -> Repo
pub async fn cd_by_identifier(identifier: FileIdentifier, repo: &Repo) -> RepoResult<Vec<u8>> {
match identifier {
FileIdentifier::ID(id) => {
let file = repo
.file()
.by_id(id)
.await?
.ok_or_else(|| "Thumbnail not found")?;
let file = repo.file().by_id(id).await?.ok_or("Thumbnail not found")?;
Ok(file.cd().to_owned())
}
FileIdentifier::CD(cd) => decode_content_descriptor(cd),
}
}
pub async fn calculate_size(size_type: &SizeType, ctx: &Context) -> RepoResult<u64> {
let repo = get_repo_from_context(ctx).await;
let (repo_path, settings) = {
let data = ctx.data.read().await;
(
data.get::<RepoPathKey>().unwrap().clone(),
data.get::<SettingsKey>().unwrap().clone(),
)
};
let size = match &size_type {
SizeType::Total => get_folder_size(repo_path).await?,
SizeType::FileFolder => repo.get_main_store_size().await?,
SizeType::ThumbFolder => repo.get_thumb_store_size().await?,
SizeType::DatabaseFile => {
let db_path = settings.paths.db_file_path(&repo_path);
let database_metadata = fs::metadata(db_path).await?;
database_metadata.len()
}
};
Ok(size)
}

@ -0,0 +1,31 @@
[package]
name = "mediarepo-worker"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
async-trait = "0.1.53"
tracing = "0.1.33"
[dependencies.mediarepo-core]
path = "../mediarepo-core"
[dependencies.mediarepo-logic]
path = "../mediarepo-logic"
[dependencies.mediarepo-database]
path = "../mediarepo-database"
[dependencies.tokio]
version = "1.21.2"
features = ["macros"]
[dependencies.chrono]
version = "0.4.19"
features = ["serde"]
[dependencies.serde]
version = "1.0.136"
features = ["derive"]

@ -0,0 +1,101 @@
use mediarepo_core::error::{RepoError, RepoResult};
use std::ops::{Deref, DerefMut};
use std::sync::Arc;
use tokio::sync::broadcast::{Receiver, Sender};
use tokio::sync::RwLock;
pub struct JobHandle<T: Send + Sync, R: Send + Sync> {
status: Arc<RwLock<T>>,
state: Arc<RwLock<JobState>>,
result_receiver: CloneableReceiver<Arc<RwLock<Option<RepoResult<R>>>>>,
}
impl<T: Send + Sync, R: Send + Sync> Clone for JobHandle<T, R> {
fn clone(&self) -> Self {
Self {
status: self.status.clone(),
state: self.state.clone(),
result_receiver: self.result_receiver.clone(),
}
}
}
impl<T: Send + Sync, R: Send + Sync> JobHandle<T, R> {
pub fn new(
status: Arc<RwLock<T>>,
state: Arc<RwLock<JobState>>,
result_receiver: CloneableReceiver<Arc<RwLock<Option<RepoResult<R>>>>>,
) -> Self {
Self {
status,
state,
result_receiver,
}
}
pub async fn state(&self) -> JobState {
*self.state.read().await
}
pub fn status(&self) -> &Arc<RwLock<T>> {
&self.status
}
pub async fn result(&mut self) -> Arc<RwLock<Option<RepoResult<R>>>> {
match self.result_receiver.recv().await {
Ok(v) => v,
Err(e) => Arc::new(RwLock::new(Some(Err(RepoError::from(&*e.to_string()))))),
}
}
pub async fn take_result(&mut self) -> Option<RepoResult<R>> {
let shared_result = self.result().await;
let mut result = shared_result.write().await;
result.take()
}
}
#[derive(Clone, Copy, Debug, Ord, PartialOrd, Eq, PartialEq)]
pub enum JobState {
Queued,
Scheduled,
Running,
Finished,
}
pub struct CloneableReceiver<T: Clone> {
receiver: Receiver<T>,
sender: Sender<T>,
}
impl<T: Clone> CloneableReceiver<T> {
pub fn new(sender: Sender<T>) -> Self {
Self {
receiver: sender.subscribe(),
sender,
}
}
}
impl<T: Clone> Clone for CloneableReceiver<T> {
fn clone(&self) -> Self {
Self {
sender: self.sender.clone(),
receiver: self.sender.subscribe(),
}
}
}
impl<T: Clone> Deref for CloneableReceiver<T> {
type Target = Receiver<T>;
fn deref(&self) -> &Self::Target {
&self.receiver
}
}
impl<T: Clone> DerefMut for CloneableReceiver<T> {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.receiver
}
}
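
The point of `CloneableReceiver` is that `tokio::sync::broadcast::Receiver` is not `Clone`; keeping the `Sender` alongside lets every clone open its own subscription. A minimal sketch of the semantics, assuming the type is imported from the handle module:

```rust
use mediarepo_worker::handle::CloneableReceiver;
use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    let (tx, _) = broadcast::channel::<u32>(4);
    let mut a = CloneableReceiver::new(tx.clone());
    let mut b = a.clone(); // opens its own subscription via the stored sender
    tx.send(1).unwrap();
    // both clones were subscribed before the send, so both observe it
    assert_eq!(a.recv().await.unwrap(), 1);
    assert_eq!(b.recv().await.unwrap(), 1);
}
```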

@ -0,0 +1,139 @@
use crate::handle::{CloneableReceiver, JobHandle, JobState};
use crate::jobs::{Job, JobTypeKey};
use mediarepo_core::error::RepoError;
use mediarepo_core::tokio_graceful_shutdown::SubsystemHandle;
use mediarepo_core::trait_bound_typemap::{SendSyncTypeMap, TypeMap, TypeMapKey};
use mediarepo_logic::dao::repo::Repo;
use mediarepo_logic::dao::DaoProvider;
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::broadcast::channel;
use tokio::sync::RwLock;
use tokio::time::Instant;
#[derive(Clone)]
pub struct JobDispatcher {
subsystem: SubsystemHandle,
job_handle_map: Arc<RwLock<SendSyncTypeMap>>,
repo: Arc<Repo>,
}
impl JobDispatcher {
pub fn new(subsystem: SubsystemHandle, repo: Repo) -> Self {
Self {
job_handle_map: Arc::new(RwLock::new(SendSyncTypeMap::new())),
subsystem,
repo: Arc::new(repo),
}
}
pub async fn dispatch<T: 'static + Job>(&self, job: T) -> JobHandle<T::JobStatus, T::Result> {
self._dispatch(job, None).await
}
pub async fn dispatch_periodically<T: 'static + Job>(
&self,
job: T,
interval: Duration,
) -> JobHandle<T::JobStatus, T::Result> {
self._dispatch(job, Some(interval)).await
}
#[tracing::instrument(level = "debug", skip_all)]
async fn _dispatch<T: 'static + Job>(
&self,
job: T,
interval: Option<Duration>,
) -> JobHandle<T::JobStatus, T::Result> {
let status = job.status();
let state = Arc::new(RwLock::new(JobState::Queued));
let (sender, mut receiver) = channel(1);
self.subsystem
.start::<RepoError, _, _>("channel-consumer", move |subsystem| async move {
tokio::select! {
_ = receiver.recv() => (),
_ = subsystem.on_shutdown_requested() => (),
}
Ok(())
});
let receiver = CloneableReceiver::new(sender.clone());
let handle = JobHandle::new(status.clone(), state.clone(), receiver);
self.add_handle::<T>(handle.clone()).await;
let repo = self.repo.clone();
self.subsystem
.start::<RepoError, _, _>("worker-job", move |subsystem| async move {
loop {
let start = Instant::now();
let job_2 = job.clone();
{
let mut state = state.write().await;
*state = JobState::Running;
}
if let Err(e) = job.load_state(repo.job()).await {
tracing::error!("failed to load the job's state: {}", e);
}
let result = tokio::select! {
_ = subsystem.on_shutdown_requested() => {
job_2.save_state(repo.job()).await
}
r = job.run(repo.clone()) => {
match r {
Err(e) => Err(e),
Ok(v) => {
let _ = sender.send(Arc::new(RwLock::new(Some(Ok(v)))));
job.save_state(repo.job()).await
}
}
}
};
if let Err(e) = result {
tracing::error!("job failed with error: {}", e);
let _ = sender.send(Arc::new(RwLock::new(Some(Err(e)))));
}
if let Some(interval) = interval {
{
let mut state = state.write().await;
*state = JobState::Scheduled;
}
let sleep_duration = interval - start.elapsed();
tokio::select! {
_ = tokio::time::sleep(sleep_duration) => {},
_ = subsystem.on_shutdown_requested() => {break}
}
} else {
let mut state = state.write().await;
*state = JobState::Finished;
break;
}
}
Ok(())
});
handle
}
#[inline]
async fn add_handle<T: 'static + Job>(&self, handle: JobHandle<T::JobStatus, T::Result>) {
let mut status_map = self.job_handle_map.write().await;
status_map.insert::<JobTypeKey<T>>(handle);
}
#[inline]
pub async fn get_handle<T: 'static + Job>(&self) -> Option<JobHandle<T::JobStatus, T::Result>> {
let map = self.job_handle_map.read().await;
map.get::<JobTypeKey<T>>().cloned()
}
}
pub struct DispatcherKey;
impl TypeMapKey for DispatcherKey {
type Value = JobDispatcher;
}
unsafe impl Send for JobDispatcher {}
unsafe impl Sync for JobDispatcher {}
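
Usage sketch for the two dispatch entry points; a periodic job is re-queued on its interval until shutdown is requested, mirroring the startup code in the worker's lib.rs later in this diff:

```rust
use std::time::Duration;

use mediarepo_worker::job_dispatcher::JobDispatcher;
use mediarepo_worker::jobs::{CheckIntegrityJob, VacuumJob};

async fn schedule_maintenance(dispatcher: &JobDispatcher) {
    // one-shot: runs once, then the handle reports JobState::Finished
    dispatcher.dispatch(VacuumJob::default()).await;
    // periodic: rescheduled every 24h until shutdown is requested
    dispatcher
        .dispatch_periodically(CheckIntegrityJob::default(), Duration::from_secs(60 * 60 * 24))
        .await;
}
```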

@ -0,0 +1,91 @@
use crate::jobs::Job;
use crate::status_utils::SimpleProgress;
use async_trait::async_trait;
use mediarepo_core::error::{RepoError, RepoResult};
use mediarepo_core::mediarepo_api::types::repo::SizeType;
use mediarepo_core::settings::Settings;
use mediarepo_core::utils::get_folder_size;
use mediarepo_logic::dao::repo::Repo;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use tokio::fs;
use tokio::sync::broadcast::{self, Sender};
use tokio::sync::RwLock;
pub struct CalculateSizesState {
pub progress: SimpleProgress,
pub sizes_channel: Sender<(SizeType, u64)>,
}
#[derive(Clone)]
pub struct CalculateSizesJob {
repo_path: PathBuf,
settings: Settings,
state: Arc<RwLock<CalculateSizesState>>,
}
impl CalculateSizesJob {
pub fn new(repo_path: PathBuf, settings: Settings) -> Self {
let (tx, _) = broadcast::channel(4);
Self {
repo_path,
settings,
state: Arc::new(RwLock::new(CalculateSizesState {
sizes_channel: tx,
progress: SimpleProgress::new(4),
})),
}
}
}
#[async_trait]
impl Job for CalculateSizesJob {
type JobStatus = CalculateSizesState;
type Result = ();
fn status(&self) -> Arc<RwLock<Self::JobStatus>> {
self.state.clone()
}
#[tracing::instrument(level = "debug", skip_all)]
async fn run(&self, repo: Arc<Repo>) -> RepoResult<()> {
let size_types = vec![
SizeType::Total,
SizeType::FileFolder,
SizeType::ThumbFolder,
SizeType::DatabaseFile,
];
for size_type in size_types {
let size = calculate_size(&size_type, &repo, &self.repo_path, &self.settings).await?;
let mut state = self.state.write().await;
state
.sizes_channel
.send((size_type, size))
.map_err(|_| RepoError::from("failed to broadcast new size"))?;
state.progress.tick();
}
Ok(())
}
}
async fn calculate_size(
size_type: &SizeType,
repo: &Repo,
repo_path: &Path,
settings: &Settings,
) -> RepoResult<u64> {
let size = match &size_type {
SizeType::Total => get_folder_size(repo_path.to_path_buf()).await?,
SizeType::FileFolder => repo.get_main_store_size().await?,
SizeType::ThumbFolder => repo.get_thumb_store_size().await?,
SizeType::DatabaseFile => {
let db_path = settings.paths.db_file_path(repo_path);
let database_metadata = fs::metadata(db_path).await?;
database_metadata.len()
}
};
Ok(size)
}

@ -0,0 +1,32 @@
use crate::jobs::Job;
use crate::status_utils::SimpleProgress;
use async_trait::async_trait;
use mediarepo_core::error::RepoResult;
use mediarepo_logic::dao::repo::Repo;
use mediarepo_logic::dao::DaoProvider;
use std::sync::Arc;
use tokio::sync::RwLock;
#[derive(Clone, Default)]
pub struct CheckIntegrityJob {
progress: Arc<RwLock<SimpleProgress>>,
}
#[async_trait]
impl Job for CheckIntegrityJob {
type JobStatus = SimpleProgress;
type Result = ();
fn status(&self) -> Arc<RwLock<Self::JobStatus>> {
self.progress.clone()
}
async fn run(&self, repo: Arc<Repo>) -> RepoResult<Self::Result> {
repo.job().check_integrity().await?;
{
let mut progress = self.progress.write().await;
progress.set_total(100);
}
Ok(())
}
}

@ -0,0 +1,109 @@
use crate::jobs::{deserialize_state, serialize_state, Job};
use crate::status_utils::SimpleProgress;
use async_trait::async_trait;
use mediarepo_core::error::RepoResult;
use mediarepo_core::thumbnailer::ThumbnailSize;
use mediarepo_database::entities::job_state::JobType;
use mediarepo_logic::dao::job::JobDao;
use mediarepo_logic::dao::repo::Repo;
use mediarepo_logic::dao::DaoProvider;
use serde::{Deserialize, Serialize};
use std::mem;
use std::sync::Arc;
use std::time::{Duration, SystemTime};
use tokio::sync::RwLock;
#[derive(Clone, Default)]
pub struct GenerateMissingThumbsJob {
state: Arc<RwLock<SimpleProgress>>,
inner_state: Arc<RwLock<GenerateThumbsState>>,
}
#[async_trait]
impl Job for GenerateMissingThumbsJob {
type JobStatus = SimpleProgress;
type Result = ();
fn status(&self) -> Arc<RwLock<Self::JobStatus>> {
self.state.clone()
}
async fn load_state(&self, job_dao: JobDao) -> RepoResult<()> {
if let Some(state) = job_dao.state_for_job_type(JobType::GenerateThumbs).await? {
let mut inner_state = self.inner_state.write().await;
let state = deserialize_state::<GenerateThumbsState>(state)?;
let _ = mem::replace(&mut *inner_state, state);
}
Ok(())
}
async fn run(&self, repo: Arc<Repo>) -> RepoResult<()> {
if !self.needs_generation(&repo).await? {
return Ok(());
}
let file_dao = repo.file();
let all_files = file_dao.all().await?;
{
let mut progress = self.state.write().await;
progress.set_total(all_files.len() as u64);
}
for file in all_files {
if file_dao.thumbnails(file.encoded_cd()).await?.is_empty() {
let _ = file_dao
.create_thumbnails(&file, vec![ThumbnailSize::Medium])
.await;
}
{
let mut progress = self.state.write().await;
progress.tick();
}
}
self.refresh_state(&repo).await?;
Ok(())
}
async fn save_state(&self, job_dao: JobDao) -> RepoResult<()> {
let state = self.inner_state.read().await;
let state = serialize_state(JobType::GenerateThumbs, &*state)?;
job_dao.upsert_state(state).await
}
}
impl GenerateMissingThumbsJob {
async fn needs_generation(&self, repo: &Repo) -> RepoResult<bool> {
let repo_counts = repo.get_counts().await?;
let file_count = repo_counts.file_count as u64;
let state = self.inner_state.read().await;
Ok(state.file_count != file_count
|| state.last_run.elapsed().unwrap() > Duration::from_secs(60 * 60))
}
async fn refresh_state(&self, repo: &Repo) -> RepoResult<()> {
let repo_counts = repo.get_counts().await?;
let mut state = self.inner_state.write().await;
state.last_run = SystemTime::now();
state.file_count = repo_counts.file_count as u64;
Ok(())
}
}
#[derive(Serialize, Deserialize)]
struct GenerateThumbsState {
file_count: u64,
last_run: SystemTime,
}
impl Default for GenerateThumbsState {
fn default() -> Self {
Self {
file_count: 0,
last_run: SystemTime::now(),
}
}
}

@ -0,0 +1,65 @@
use crate::jobs::{deserialize_state, serialize_state, Job};
use crate::status_utils::SimpleProgress;
use async_trait::async_trait;
use mediarepo_core::error::RepoResult;
use mediarepo_database::entities::job_state::JobType;
use mediarepo_logic::dao::job::JobDao;
use mediarepo_logic::dao::repo::Repo;
use mediarepo_logic::dao::DaoProvider;
use serde::{Deserialize, Serialize};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use tokio::sync::RwLock;
#[derive(Clone, Default)]
pub struct MigrateCDsJob {
progress: Arc<RwLock<SimpleProgress>>,
migrated: Arc<AtomicBool>,
}
#[async_trait]
impl Job for MigrateCDsJob {
type JobStatus = SimpleProgress;
type Result = ();
fn status(&self) -> Arc<tokio::sync::RwLock<Self::JobStatus>> {
self.progress.clone()
}
async fn load_state(&self, job_dao: JobDao) -> RepoResult<()> {
if let Some(state) = job_dao.state_for_job_type(JobType::MigrateCDs).await? {
let state = deserialize_state::<MigrationStatus>(state)?;
self.migrated.store(state.migrated, Ordering::SeqCst);
}
Ok(())
}
async fn run(&self, repo: Arc<Repo>) -> RepoResult<Self::Result> {
if self.migrated.load(Ordering::SeqCst) {
return Ok(());
}
let job_dao = repo.job();
job_dao.migrate_content_descriptors().await?;
self.migrated.store(true, Ordering::Relaxed);
{
let mut progress = self.progress.write().await;
progress.set_total(100);
}
Ok(())
}
async fn save_state(&self, job_dao: JobDao) -> RepoResult<()> {
if self.migrated.load(Ordering::Relaxed) {
let state = serialize_state(JobType::MigrateCDs, &MigrationStatus { migrated: true })?;
job_dao.upsert_state(state).await?;
}
Ok(())
}
}
#[derive(Serialize, Deserialize)]
struct MigrationStatus {
pub migrated: bool,
}

@ -0,0 +1,71 @@
mod calculate_sizes;
mod check_integrity;
mod generate_missing_thumbnails;
mod migrate_content_descriptors;
mod vacuum;
pub use calculate_sizes::*;
pub use check_integrity::*;
pub use generate_missing_thumbnails::*;
pub use migrate_content_descriptors::*;
use std::marker::PhantomData;
use std::sync::Arc;
pub use vacuum::*;
use crate::handle::JobHandle;
use async_trait::async_trait;
use mediarepo_core::bincode;
use mediarepo_core::error::{RepoError, RepoResult};
use mediarepo_core::trait_bound_typemap::TypeMapKey;
use mediarepo_database::entities::job_state::JobType;
use mediarepo_logic::dao::job::JobDao;
use mediarepo_logic::dao::repo::Repo;
use mediarepo_logic::dto::{JobStateDto, UpsertJobStateDto};
use serde::de::DeserializeOwned;
use serde::Serialize;
use tokio::sync::RwLock;
type EmptyStatus = Arc<RwLock<()>>;
#[async_trait]
pub trait Job: Clone + Send + Sync {
type JobStatus: Send + Sync;
type Result: Send + Sync;
fn status(&self) -> Arc<RwLock<Self::JobStatus>>;
async fn load_state(&self, _job_dao: JobDao) -> RepoResult<()> {
Ok(())
}
async fn run(&self, repo: Arc<Repo>) -> RepoResult<Self::Result>;
async fn save_state(&self, _job_dao: JobDao) -> RepoResult<()> {
Ok(())
}
}
pub struct JobTypeKey<T: Job>(PhantomData<T>);
impl<T: 'static> TypeMapKey for JobTypeKey<T>
where
T: Job,
{
type Value = JobHandle<T::JobStatus, T::Result>;
}
pub fn deserialize_state<T: DeserializeOwned>(dto: JobStateDto) -> RepoResult<T> {
bincode::deserialize(dto.value()).map_err(RepoError::from)
}
pub fn serialize_state<T: Serialize>(
job_type: JobType,
state: &T,
) -> RepoResult<UpsertJobStateDto> {
let dto = UpsertJobStateDto {
value: bincode::serialize(state)?,
job_type,
};
Ok(dto)
}
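
A hypothetical job type showing the minimal trait surface: only `status()` and `run()` are required, while `load_state`/`save_state` fall back to the no-op defaults:

```rust
use std::sync::Arc;

use async_trait::async_trait;
use mediarepo_core::error::RepoResult;
use mediarepo_logic::dao::repo::Repo;
use mediarepo_logic::dao::DaoProvider;
use tokio::sync::RwLock;

use crate::jobs::Job;
use crate::status_utils::SimpleProgress;

#[derive(Clone, Default)]
pub struct CountFilesJob {
    progress: Arc<RwLock<SimpleProgress>>,
}

#[async_trait]
impl Job for CountFilesJob {
    type JobStatus = SimpleProgress;
    type Result = u64;

    fn status(&self) -> Arc<RwLock<Self::JobStatus>> {
        self.progress.clone()
    }

    async fn run(&self, repo: Arc<Repo>) -> RepoResult<Self::Result> {
        let count = repo.file().all().await?.len() as u64;
        self.progress.write().await.tick(); // mark completion on the status
        Ok(count)
    }
}
```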

@ -0,0 +1,27 @@
use crate::jobs::{EmptyStatus, Job};
use async_trait::async_trait;
use mediarepo_core::error::RepoResult;
use mediarepo_logic::dao::repo::Repo;
use mediarepo_logic::dao::DaoProvider;
use std::sync::Arc;
use tokio::sync::RwLock;
#[derive(Default, Clone)]
pub struct VacuumJob;
#[async_trait]
impl Job for VacuumJob {
type JobStatus = ();
type Result = ();
fn status(&self) -> Arc<RwLock<Self::JobStatus>> {
EmptyStatus::default()
}
#[tracing::instrument(level = "debug", skip_all)]
async fn run(&self, repo: Arc<Repo>) -> RepoResult<()> {
repo.job().vacuum().await?;
Ok(())
}
}

@ -0,0 +1,37 @@
use crate::job_dispatcher::JobDispatcher;
use crate::jobs::{CheckIntegrityJob, MigrateCDsJob};
use mediarepo_core::error::RepoError;
use mediarepo_core::tokio_graceful_shutdown::Toplevel;
use mediarepo_logic::dao::repo::Repo;
use std::time::Duration;
use tokio::sync::oneshot::channel;
pub mod handle;
pub mod job_dispatcher;
pub mod jobs;
pub mod status_utils;
pub async fn start(top_level: Toplevel, repo: Repo) -> (Toplevel, JobDispatcher) {
let (tx, rx) = channel();
let top_level =
top_level.start::<RepoError, _, _>("mediarepo-worker", |subsystem| async move {
let dispatcher = JobDispatcher::new(subsystem, repo);
tx.send(dispatcher.clone())
.map_err(|_| RepoError::from("failed to send dispatcher"))?;
dispatcher
.dispatch_periodically(
CheckIntegrityJob::default(),
Duration::from_secs(60 * 60 * 24),
)
.await;
dispatcher.dispatch(MigrateCDsJob::default()).await;
Ok(())
});
let receiver = rx
.await
.expect("failed to create background job dispatcher");
(top_level, receiver)
}

@ -0,0 +1,39 @@
pub struct SimpleProgress {
pub current: u64,
pub total: u64,
}
impl Default for SimpleProgress {
fn default() -> Self {
Self {
total: 100,
current: 0,
}
}
}
impl SimpleProgress {
pub fn new(total: u64) -> Self {
Self { total, current: 0 }
}
/// Sets the total count
pub fn set_total(&mut self, total: u64) {
self.total = total;
}
/// Increments the current progress by 1
pub fn tick(&mut self) {
self.current += 1;
}
/// Sets the current progress to a defined value
pub fn set_current(&mut self, current: u64) {
self.current = current;
}
/// Returns the progress as a fraction (current/total, in the range 0..=1)
pub fn percent(&self) -> f64 {
(self.current as f64) / (self.total as f64)
}
}
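
Note that `percent()` yields a fraction in `0..=1` rather than a `0..100` value; a usage sketch:

```rust
#[cfg(test)]
mod tests {
    use super::SimpleProgress;

    #[test]
    fn percent_is_a_fraction() {
        let mut progress = SimpleProgress::new(4);
        progress.tick();
        progress.tick();
        assert_eq!(progress.percent(), 0.5);
        progress.set_current(4);
        assert_eq!(progress.percent(), 1.0);
    }
}
```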

@ -1,64 +1,125 @@
use std::fs;
use std::path::PathBuf;
use std::path::Path;
use console_subscriber::ConsoleLayer;
use opentelemetry::sdk::Resource;
use opentelemetry::{sdk, KeyValue};
use rolling_file::RollingConditionBasic;
use tracing::Level;
use tracing_appender::non_blocking::{NonBlocking, WorkerGuard};
use tracing_flame::FlameLayer;
use tracing_log::LogTracer;
use tracing_subscriber::{
fmt::{self},
Layer, Registry,
};
use tracing_subscriber::filter::{self, Targets};
use tracing_subscriber::fmt::format::FmtSpan;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use tracing_subscriber::{
fmt::{self},
Layer, Registry,
};
use mediarepo_core::settings::LoggingSettings;
use mediarepo_core::tracing_layer_list::DynLayerList;
#[allow(dyn_drop)]
pub type DropGuard = Box<dyn Drop>;
pub fn init_tracing(repo_path: &PathBuf, log_cfg: &LoggingSettings) -> Vec<DropGuard> {
pub fn init_tracing(repo_path: &Path, log_cfg: &LoggingSettings) -> Vec<DropGuard> {
LogTracer::init().expect("failed to subscribe to log entries");
let log_path = repo_path.join("logs");
let mut guards = Vec::new();
let mut layer_list = DynLayerList::new();
if !log_path.exists() {
fs::create_dir(&log_path).expect("failed to create directory for log files");
}
let (stdout_writer, guard) = tracing_appender::non_blocking(std::io::stdout());
guards.push(Box::new(guard) as DropGuard);
let stdout_layer = fmt::layer()
.with_thread_names(false)
.with_target(true)
.with_writer(stdout_writer)
.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
.with_filter(
std::env::var("RUST_LOG")
.unwrap_or(String::from("info,sqlx=warn"))
.parse::<filter::Targets>()
.unwrap_or(
add_stdout_layer(&mut guards, &mut layer_list);
add_sql_layer(log_cfg, &log_path, &mut guards, &mut layer_list);
add_bromine_layer(log_cfg, &log_path, &mut guards, &mut layer_list);
add_app_log_layer(log_cfg, &log_path, &mut guards, &mut layer_list);
if log_cfg.telemetry {
add_telemetry_layer(log_cfg, &mut layer_list);
}
let tokio_console_enabled = std::env::var("TOKIO_CONSOLE")
.map(|v| v.eq_ignore_ascii_case("true"))
.unwrap_or(false);
if tokio_console_enabled {
add_tokio_console_layer(&mut layer_list);
}
let registry = Registry::default().with(layer_list);
tracing::subscriber::set_global_default(registry).expect("Failed to initialize tracing");
guards
}
fn add_tokio_console_layer(layer_list: &mut DynLayerList<Registry>) {
let console_layer = ConsoleLayer::builder().with_default_env().spawn();
layer_list.add(console_layer);
}
fn add_telemetry_layer(log_cfg: &LoggingSettings, layer_list: &mut DynLayerList<Registry>) {
match opentelemetry_jaeger::new_pipeline()
.with_agent_endpoint(&log_cfg.telemetry_endpoint)
.with_trace_config(
sdk::trace::Config::default()
.with_resource(Resource::new(vec![KeyValue::new(
"service.name",
"mediarepo-daemon",
)]))
.with_max_attributes_per_span(1),
)
.with_instrumentation_library_tags(false)
.with_service_name("mediarepo-daemon")
.install_batch(opentelemetry::runtime::Tokio)
{
Ok(tracer) => {
let telemetry_layer = tracing_opentelemetry::layer()
.with_tracer(tracer)
.with_filter(
filter::Targets::new()
.with_default(Level::INFO)
.with_target("sqlx", Level::WARN),
),
);
.with_target("tokio", Level::INFO)
.with_target("h2", Level::INFO)
.with_target("sqlx", Level::ERROR)
.with_target("sea_orm", Level::INFO),
);
layer_list.add(telemetry_layer);
}
Err(e) => {
eprintln!("Failed to initialize telemetry tracing: {}", e);
}
}
}
let (sql_writer, guard) = get_sql_log_writer(&log_path);
fn add_app_log_layer(
log_cfg: &LoggingSettings,
log_path: &Path,
guards: &mut Vec<DropGuard>,
layer_list: &mut DynLayerList<Registry>,
) {
let (app_log_writer, guard) = get_application_log_writer(log_path);
guards.push(Box::new(guard) as DropGuard);
let sql_layer = fmt::layer()
.with_writer(sql_writer)
let app_log_layer = fmt::layer()
.with_writer(app_log_writer)
.pretty()
.with_ansi(false)
.with_span_events(FmtSpan::NONE)
.with_filter(get_sql_targets(log_cfg.trace_sql));
.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
.with_filter(get_app_targets(log_cfg.level.clone().into()));
layer_list.add(app_log_layer);
}
let (bromine_writer, guard) = get_bromine_log_writer(&log_path);
fn add_bromine_layer(
log_cfg: &LoggingSettings,
log_path: &Path,
guards: &mut Vec<DropGuard>,
layer_list: &mut DynLayerList<Registry>,
) {
let (bromine_writer, guard) = get_bromine_log_writer(log_path);
guards.push(Box::new(guard) as DropGuard);
let bromine_layer = fmt::layer()
@ -67,39 +128,51 @@ pub fn init_tracing(repo_path: &PathBuf, log_cfg: &LoggingSettings) -> Vec<DropG
.with_ansi(false)
.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
.with_filter(get_bromine_targets(log_cfg.trace_api_calls));
layer_list.add(bromine_layer);
}
let (app_log_writer, guard) = get_application_log_writer(&log_path);
fn add_sql_layer(
log_cfg: &LoggingSettings,
log_path: &Path,
guards: &mut Vec<DropGuard>,
layer_list: &mut DynLayerList<Registry>,
) {
let (sql_writer, guard) = get_sql_log_writer(log_path);
guards.push(Box::new(guard) as DropGuard);
let app_log_layer = fmt::layer()
.with_writer(app_log_writer)
let sql_layer = fmt::layer()
.with_writer(sql_writer)
.pretty()
.with_ansi(false)
.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
.with_filter(get_app_targets(log_cfg.level.clone().into()));
let registry = Registry::default()
.with(stdout_layer)
.with(sql_layer)
.with(bromine_layer)
.with(app_log_layer);
.with_span_events(FmtSpan::NONE)
.with_filter(get_sql_targets(log_cfg.trace_sql));
let tokio_console_enabled = std::env::var("TOKIO_CONSOLE")
.map(|v| v.eq_ignore_ascii_case("true"))
.unwrap_or(false);
layer_list.add(sql_layer);
}
if tokio_console_enabled {
let console_layer = ConsoleLayer::builder().with_default_env().spawn();
let registry = registry.with(console_layer);
tracing::subscriber::set_global_default(registry).expect("Failed to initialize tracing");
} else {
tracing::subscriber::set_global_default(registry).expect("Failed to initialize tracing");
}
fn add_stdout_layer(guards: &mut Vec<DropGuard>, layer_list: &mut DynLayerList<Registry>) {
let (stdout_writer, guard) = tracing_appender::non_blocking(std::io::stdout());
guards.push(Box::new(guard) as DropGuard);
guards
let stdout_layer = fmt::layer()
.with_thread_names(false)
.with_target(true)
.with_writer(stdout_writer)
.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
.with_filter(
std::env::var("RUST_LOG")
.unwrap_or_else(|_| String::from("info,sqlx=warn"))
.parse::<filter::Targets>()
.unwrap_or_else(|_| {
filter::Targets::new()
.with_default(Level::INFO)
.with_target("sqlx", Level::WARN)
}),
);
layer_list.add(stdout_layer);
}
fn get_sql_log_writer(log_path: &PathBuf) -> (NonBlocking, WorkerGuard) {
fn get_sql_log_writer(log_path: &Path) -> (NonBlocking, WorkerGuard) {
tracing_appender::non_blocking(
rolling_file::BasicRollingFileAppender::new(
log_path.join("sql.log"),
@ -110,7 +183,7 @@ fn get_sql_log_writer(log_path: &PathBuf) -> (NonBlocking, WorkerGuard) {
)
}
fn get_bromine_log_writer(log_path: &PathBuf) -> (NonBlocking, WorkerGuard) {
fn get_bromine_log_writer(log_path: &Path) -> (NonBlocking, WorkerGuard) {
tracing_appender::non_blocking(
rolling_file::BasicRollingFileAppender::new(
log_path.join("bromine.log"),
@ -121,7 +194,7 @@ fn get_bromine_log_writer(log_path: &PathBuf) -> (NonBlocking, WorkerGuard) {
)
}
fn get_application_log_writer(log_path: &PathBuf) -> (NonBlocking, WorkerGuard) {
fn get_application_log_writer(log_path: &Path) -> (NonBlocking, WorkerGuard) {
tracing_appender::non_blocking(
rolling_file::BasicRollingFileAppender::new(
log_path.join("repo.log"),

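The logging setup is now composed at runtime: each `add_*` helper pushes a filtered layer into a `DynLayerList`, and the whole list is attached to the `Registry` in one step, with tokio-console opt-in via the `TOKIO_CONSOLE=true` environment variable. A condensed sketch, assuming `DynLayerList::add` accepts any registry layer the way the calls above suggest:

```rust
use mediarepo_core::tracing_layer_list::DynLayerList;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::{fmt, Registry};

fn build_subscriber() -> impl tracing::Subscriber {
    let mut layer_list = DynLayerList::new();
    layer_list.add(fmt::layer().with_target(true)); // plus sql/bromine/app layers
    Registry::default().with(layer_list)
}
```
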
@ -1,19 +1,23 @@
use std::env;
use std::path::PathBuf;
use std::iter::FromIterator;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::Duration;
use structopt::StructOpt;
use tokio::fs;
use tokio::io::AsyncWriteExt;
use tokio::runtime;
use tokio::runtime::Runtime;
use mediarepo_core::error::RepoResult;
use mediarepo_core::fs::drop_file::DropFile;
use mediarepo_core::settings::{PathSettings, Settings};
use mediarepo_core::tokio_graceful_shutdown::{SubsystemHandle, Toplevel};
use mediarepo_core::trait_bound_typemap::{CloneSendSyncTypeMap, SendSyncTypeMap, TypeMap};
use mediarepo_core::type_keys::{RepoPathKey, SettingsKey};
use mediarepo_logic::dao::repo::Repo;
use mediarepo_logic::type_keys::RepoKey;
use mediarepo_socket::start_tcp_server;
use mediarepo_worker::job_dispatcher::DispatcherKey;
use crate::utils::{create_paths_for_repo, get_repo, load_settings};
@ -49,7 +53,9 @@ enum SubCommand {
Start,
}
fn main() -> RepoResult<()> {
#[tokio::main]
async fn main() -> RepoResult<()> {
human_panic::setup_panic!();
let mut opt: Opt = Opt::from_args();
opt.repo = env::current_dir().unwrap().join(opt.repo);
@ -66,7 +72,7 @@ fn main() -> RepoResult<()> {
} else {
Settings::default()
};
clean_old_connection_files(&opt.repo)?;
clean_old_connection_files(&opt.repo).await?;
let mut guards = Vec::new();
if opt.profile {
@ -76,10 +82,11 @@ fn main() -> RepoResult<()> {
}
let result = match opt.cmd.clone() {
SubCommand::Init { force } => get_single_thread_runtime().block_on(init(opt, force)),
SubCommand::Start => get_multi_thread_runtime().block_on(start_server(opt, settings)),
SubCommand::Init { force } => init(opt, force).await,
SubCommand::Start => start_server(opt, settings).await,
};
opentelemetry::global::shutdown_tracer_provider();
match result {
Ok(_) => Ok(()),
Err(e) => {
@ -90,23 +97,6 @@ fn main() -> RepoResult<()> {
}
}
fn get_single_thread_runtime() -> Runtime {
log::info!("Using current thread runtime");
runtime::Builder::new_current_thread()
.enable_all()
.max_blocking_threads(1)
.build()
.unwrap()
}
fn get_multi_thread_runtime() -> Runtime {
log::info!("Using multi thread runtime");
runtime::Builder::new_multi_thread()
.enable_all()
.build()
.unwrap()
}
async fn init_repo(opt: &Opt, paths: &PathSettings) -> RepoResult<Repo> {
let repo = get_repo(&opt.repo, paths).await?;
@ -116,19 +106,29 @@ async fn init_repo(opt: &Opt, paths: &PathSettings) -> RepoResult<Repo> {
/// Starts the server
async fn start_server(opt: Opt, settings: Settings) -> RepoResult<()> {
let repo = init_repo(&opt, &settings.paths).await?;
let mut top_level = Toplevel::new();
let (mut top_level, dispatcher) = mediarepo_worker::start(Toplevel::new(), repo.clone()).await;
let mut shared_data = CloneSendSyncTypeMap::new();
shared_data.insert::<RepoKey>(Arc::new(repo));
shared_data.insert::<SettingsKey>(settings.clone());
shared_data.insert::<RepoPathKey>(opt.repo.clone());
shared_data.insert::<DispatcherKey>(dispatcher);
#[cfg(unix)]
{
if settings.server.unix_socket.enabled {
let settings = settings.clone();
let repo_path = opt.repo.clone();
let repo = repo.clone();
let shared_data = shared_data.clone();
top_level = top_level.start("mediarepo-unix-socket", |subsystem| {
Box::pin(async move {
start_and_await_unix_socket(subsystem, repo_path, settings, repo).await?;
Ok(())
start_and_await_unix_socket(
subsystem,
repo_path,
SendSyncTypeMap::from_iter(shared_data),
)
.await?;
RepoResult::Ok(())
})
})
}
@ -137,9 +137,15 @@ async fn start_server(opt: Opt, settings: Settings) -> RepoResult<()> {
if settings.server.tcp.enabled {
top_level = top_level.start("mediarepo-tcp", move |subsystem| {
Box::pin(async move {
start_and_await_tcp_server(subsystem, opt.repo, settings, repo).await?;
Ok(())
start_and_await_tcp_server(
subsystem,
opt.repo,
settings,
SendSyncTypeMap::from_iter(shared_data),
)
.await?;
RepoResult::Ok(())
})
})
}
@ -164,9 +170,9 @@ async fn start_and_await_tcp_server(
subsystem: SubsystemHandle,
repo_path: PathBuf,
settings: Settings,
repo: Repo,
shared_data: SendSyncTypeMap,
) -> RepoResult<()> {
let (address, handle) = start_tcp_server(subsystem.clone(), repo_path.clone(), settings, repo)?;
let (address, handle) = start_tcp_server(subsystem.clone(), settings, shared_data)?;
let (mut file, _guard) = DropFile::new(repo_path.join("repo.tcp")).await?;
file.write_all(&address.into_bytes()).await?;
@ -189,17 +195,10 @@ async fn start_and_await_tcp_server(
async fn start_and_await_unix_socket(
subsystem: SubsystemHandle,
repo_path: PathBuf,
settings: Settings,
repo: Repo,
shared_data: SendSyncTypeMap,
) -> RepoResult<()> {
let socket_path = repo_path.join("repo.sock");
let handle = mediarepo_socket::create_unix_socket(
subsystem.clone(),
socket_path,
repo_path.clone(),
settings,
repo,
)?;
let handle = mediarepo_socket::create_unix_socket(subsystem.clone(), socket_path, shared_data)?;
let _guard = DropFile::from_path(repo_path.join("repo.sock"));
tokio::select! {
@ -244,14 +243,14 @@ async fn init(opt: Opt, force: bool) -> RepoResult<()> {
Ok(())
}
fn clean_old_connection_files(root: &PathBuf) -> RepoResult<()> {
async fn clean_old_connection_files(root: &Path) -> RepoResult<()> {
let paths = ["repo.tcp", "repo.sock"];
for path in paths {
let path = root.join(path);
if path.exists() {
std::fs::remove_file(&path)?;
tokio::fs::remove_file(&path).await?;
}
}

@ -1,14 +1,14 @@
use std::path::PathBuf;
use std::path::Path;
use tokio::fs;
use mediarepo_core::error::RepoResult;
use mediarepo_core::settings::{PathSettings, Settings};
use mediarepo_core::settings::v1::SettingsV1;
use mediarepo_core::settings::{PathSettings, Settings};
use mediarepo_logic::dao::repo::Repo;
/// Loads the settings from a toml path
pub fn load_settings(root_path: &PathBuf) -> RepoResult<Settings> {
pub fn load_settings(root_path: &Path) -> RepoResult<Settings> {
let contents = std::fs::read_to_string(root_path.join("repo.toml"))?;
if let Ok(settings_v1) = SettingsV1::from_toml_string(&contents) {
@ -21,7 +21,7 @@ pub fn load_settings(root_path: &PathBuf) -> RepoResult<Settings> {
}
}
pub async fn get_repo(root_path: &PathBuf, path_settings: &PathSettings) -> RepoResult<Repo> {
pub async fn get_repo(root_path: &Path, path_settings: &PathSettings) -> RepoResult<Repo> {
Repo::connect(
format!(
"sqlite://{}",
@ -33,7 +33,7 @@ pub async fn get_repo(root_path: &PathBuf, path_settings: &PathSettings) -> Repo
.await
}
pub async fn create_paths_for_repo(root: &PathBuf, settings: &PathSettings) -> RepoResult<()> {
pub async fn create_paths_for_repo(root: &Path, settings: &PathSettings) -> RepoResult<()> {
if !root.exists() {
fs::create_dir_all(&root).await?;
}

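The `&PathBuf` parameters became `&Path` throughout: a `&Path` accepts both borrowed paths and `&PathBuf` via deref coercion, and satisfies clippy's `ptr_arg` lint. A small illustration:

```rust
use std::path::{Path, PathBuf};

fn load_settings_path(root_path: &Path) -> PathBuf {
    root_path.join("repo.toml")
}

fn main() {
    let owned = PathBuf::from("/srv/mediarepo");
    let _a = load_settings_path(&owned);            // &PathBuf coerces to &Path
    let _b = load_settings_path(Path::new("/tmp")); // a plain &Path works too
}
```
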
@ -1,66 +1,57 @@
{
"root": true,
"ignorePatterns": [
"projects/**/*"
],
"overrides": [
{
"files": [
"*.ts"
],
"parserOptions": {
"project": [
"tsconfig.json"
],
"createDefaultProgram": true
},
"extends": [
"plugin:@angular-eslint/recommended",
"plugin:@angular-eslint/template/process-inline-templates"
],
"rules": {
"@angular-eslint/directive-selector": [
"error",
{
"type": "attribute",
"prefix": "app",
"style": "camelCase"
}
],
"@angular-eslint/component-selector": [
"error",
{
"type": "element",
"prefix": "app",
"style": "kebab-case"
}
],
"quotes": [
"warn",
"double",
{
"avoidEscape": true
}
],
"indent": [
"error",
4,
{
"SwitchCase": 1
}
],
"no-unused-expressions": "warn",
"semi": "error"
}
},
{
"files": [
"*.html"
],
"extends": [
"plugin:@angular-eslint/template/recommended"
],
"rules": {}
}
]
"root": true,
"ignorePatterns": ["projects/**/*"],
"overrides": [
{
"files": ["*.ts"],
"parserOptions": {
"project": ["tsconfig.json"],
"createDefaultProgram": true
},
"extends": [
"plugin:@angular-eslint/recommended",
"plugin:@angular-eslint/template/process-inline-templates"
],
"rules": {
"@angular-eslint/directive-selector": [
"error",
{
"type": "attribute",
"prefix": "app",
"style": "camelCase"
}
],
"@angular-eslint/component-selector": [
"error",
{
"type": "element",
"prefix": "app",
"style": "kebab-case"
}
],
"quotes": [
"warn",
"double",
{
"avoidEscape": true
}
],
"indent": [
"error",
4,
{
"SwitchCase": 1
}
],
"no-unused-expressions": "warn",
"no-extraneous-class": "off",
"semi": "error"
}
},
{
"files": ["*.html"],
"extends": ["plugin:@angular-eslint/template/recommended"],
"rules": {}
}
]
}

@ -0,0 +1,15 @@
[language-server.biome]
command = "biome"
args = ["lsp-proxy"]
[[language]]
name = "typescript"
language-servers = ["typescript-language-server"]
auto-format = true
formatter = { command = "biome" , args = ["format", "--stdin-file-path=file.ts"] }
[[language]]
name = "javascript"
language-servers = ["typescript-language-server", "biome"]
auto-format = true
formatter = { command = "biome" , args = ["format", "--stdin-file-path=file.js"] }

@ -3,7 +3,8 @@
"version": 1,
"cli": {
"packageManager": "yarn",
"defaultCollection": "@angular-eslint/schematics"
"defaultCollection": "@angular-eslint/schematics",
"analytics": "dc09bab7-b1ef-4661-8d46-1da5b61c8e44"
},
"newProjectRoot": "projects",
"projects": {
@ -40,7 +41,9 @@
"src/styles.scss",
"src/material-theme-correction.scss"
],
"scripts": []
"scripts": [
"./node_modules/chart.js/dist/chart.js"
]
},
"configurations": {
"production": {
@ -114,7 +117,9 @@
"src/styles.scss",
"src/material-theme-correction.scss"
],
"scripts": []
"scripts": [
"./node_modules/chart.js/dist/chart.js"
]
}
},
"lint": {

@ -0,0 +1,15 @@
{
"$schema": "https://biomejs.dev/schemas/1.3.3/schema.json",
"organizeImports": {
"enabled": true
},
"linter": {
"enabled": true,
"rules": {
"recommended": true,
"complexity": {
"noStaticOnlyClass": "off"
}
}
}
}

@ -1,62 +1,64 @@
{
"name": "mediarepo-ui",
"version": "1.0.0-rc.2",
"scripts": {
"ng": "ng",
"start": "ng serve",
"build": "ng build",
"watch": "ng build --watch --configuration development",
"watch-prod": "ng build --watch --configuration production",
"test": "ng test",
"lint": "ng lint",
"tauri": "tauri"
},
"private": true,
"dependencies": {
"@angular/animations": "~13.1.2",
"@angular/cdk": "^13.1.2",
"@angular/common": "~13.1.2",
"@angular/compiler": "~13.1.2",
"@angular/core": "~13.1.2",
"@angular/flex-layout": "^13.0.0-beta.36",
"@angular/forms": "~13.1.2",
"@angular/material": "^13.1.2",
"@angular/platform-browser": "~13.1.2",
"@angular/platform-browser-dynamic": "~13.1.2",
"@angular/router": "~13.1.2",
"@ng-icons/core": "^13.2.1",
"@ng-icons/feather-icons": "^13.2.1",
"@ng-icons/material-icons": "^13.2.1",
"@tauri-apps/api": "^1.0.0-beta.8",
"primeicons": "^5.0.0",
"primeng": "^13.0.4",
"rxjs": "~7.5.2",
"tslib": "^2.3.1",
"w3c-keys": "^1.0.3",
"zone.js": "~0.11.4"
},
"devDependencies": {
"@angular-devkit/build-angular": "~13.1.3",
"@angular-eslint/builder": "^13.0.1",
"@angular-eslint/eslint-plugin": "^13.0.1",
"@angular-eslint/eslint-plugin-template": "^13.0.1",
"@angular-eslint/schematics": "^13.0.1",
"@angular-eslint/template-parser": "^13.0.1",
"@angular/cli": "~13.1.3",
"@angular/compiler-cli": "~13.1.2",
"@tauri-apps/cli": "^1.0.0-beta.10",
"@types/file-saver": "^2.0.4",
"@types/jasmine": "~3.10.3",
"@types/node": "^16.11.19",
"@typescript-eslint/eslint-plugin": "5.9.1",
"@typescript-eslint/parser": "^5.9.1",
"eslint": "^8.6.0",
"jasmine-core": "~4.0.0",
"karma": "~6.3.10",
"karma-chrome-launcher": "~3.1.0",
"karma-coverage": "~2.1.0",
"karma-jasmine": "~4.0.1",
"karma-jasmine-html-reporter": "~1.7.0",
"typescript": "~4.5.4"
}
}
"name": "mediarepo-ui",
"version": "1.0.5",
"scripts": {
"ng": "ng",
"start": "ng serve",
"build": "ng build",
"watch": "ng build --watch --configuration development",
"watch-prod": "ng build --watch --configuration production",
"test": "ng test",
"lint": "ng lint",
"tauri": "tauri"
},
"private": true,
"dependencies": {
"@angular/animations": "~13.3.2",
"@angular/cdk": "^13.3.2",
"@angular/common": "~13.3.2",
"@angular/compiler": "~13.3.2",
"@angular/core": "~13.3.2",
"@angular/flex-layout": "^13.0.0-beta.36",
"@angular/forms": "~13.3.2",
"@angular/material": "^13.3.2",
"@angular/platform-browser": "~13.3.2",
"@angular/platform-browser-dynamic": "~13.3.2",
"@angular/router": "~13.3.2",
"@ng-icons/core": "^15.1.0",
"@ng-icons/feather-icons": "^15.1.0",
"@ng-icons/material-icons": "^15.1.0",
"@tauri-apps/api": "^1.5.3",
"chart.js": "^3.7.1",
"primeicons": "^5.0.0",
"primeng": "^13.3.2",
"rxjs": "~7.5.5",
"tslib": "^2.3.1",
"w3c-keys": "^1.0.3",
"zone.js": "~0.11.5"
},
"devDependencies": {
"@angular-devkit/build-angular": "~13.3.2",
"@angular-eslint/builder": "^13.2.0",
"@angular-eslint/eslint-plugin": "^13.2.0",
"@angular-eslint/eslint-plugin-template": "^13.2.0",
"@angular-eslint/schematics": "^13.2.0",
"@angular-eslint/template-parser": "^13.2.0",
"@angular/cli": "~13.3.2",
"@angular/compiler-cli": "~13.3.2",
"@angular/language-service": "^17.1.1",
"@tauri-apps/cli": "^1.5.4",
"@types/file-saver": "^2.0.4",
"@types/jasmine": "~4.0.2",
"@types/node": "^17.0.23",
"@typescript-eslint/eslint-plugin": "5.19.0",
"@typescript-eslint/parser": "^5.19.0",
"eslint": "^8.13.0",
"jasmine-core": "~4.0.0",
"karma": "~6.3.10",
"karma-chrome-launcher": "~3.1.0",
"karma-coverage": "~2.2.0",
"karma-jasmine": "~4.0.2",
"karma-jasmine-html-reporter": "~1.7.0",
"typescript": "~4.6.3"
}
}

File diff suppressed because it is too large

@ -1,29 +1,29 @@
[package]
name = "app"
version = "1.0.0-rc.2"
version = "1.0.5"
description = "The UI for the mediarepo media management tool"
authors = ["you"]
license = ""
repository = ""
default-run = "app"
edition = "2018"
edition = "2021"
build = "src/build.rs"
[build-dependencies]
tauri-build = { version = "1.0.0-beta.4", features = [] }
tauri-build = { version = "1.5.1", features = [] }
[dependencies]
serde_json = "1.0.78"
serde_json = "1.0.79"
serde = { version = "1.0.136", features = ["derive"] }
thiserror = "1.0.30"
typemap_rev = "0.1.5"
[dependencies.tauri]
version = "1.0.0-beta.8"
version = "1.5.4"
features = ["dialog-all", "path-all", "shell-all"]
[dependencies.tracing-subscriber]
version = "0.3.8"
version = "0.3.9"
features = ["env-filter"]
[dependencies.mediarepo-api]

@ -1,76 +1,74 @@
{
"package": {
"productName": "mediarepo-ui",
"version": "1.0.0"
},
"build": {
"distDir": "../dist/mediarepo-ui",
"devPath": "http://localhost:4200",
"beforeDevCommand": "yarn start",
"beforeBuildCommand": "yarn build"
},
"tauri": {
"bundle": {
"active": true,
"targets": "all",
"identifier": "net.trivernis.mediarepo",
"icon": [
"icons/32x32.png",
"icons/64x64.png",
"icons/128x128.png",
"icons/128x128@2x.png",
"icons/icon.ico",
"icons/icon.icns"
],
"resources": [],
"externalBin": [],
"copyright": "",
"category": "Productivity",
"shortDescription": "A media management tool",
"longDescription": "",
"deb": {
"depends": [],
"useBootstrapper": false
},
"macOS": {
"frameworks": [],
"minimumSystemVersion": "",
"useBootstrapper": false,
"exceptionDomain": "",
"signingIdentity": null,
"entitlements": null
},
"windows": {
"certificateThumbprint": null,
"digestAlgorithm": "sha256",
"timestampUrl": ""
}
},
"updater": {
"active": false
},
"allowlist": {
"dialog": {
"all": true
},
"shell": {
"all": true
},
"path": {
"all": true
}
},
"windows": [
{
"title": "mediarepo",
"width": 1920,
"height": 1080,
"resizable": true,
"fullscreen": false
}
],
"security": {
"csp": "default-src blob: data: filesystem: ws: wss: http: https: tauri: 'unsafe-eval' 'unsafe-inline' 'self' img-src: 'self' once: thumb: content:"
}
}
"package": {
"productName": "mediarepo-ui",
"version": "1.0.4"
},
"build": {
"distDir": "../dist/mediarepo-ui",
"devPath": "http://localhost:4200",
"beforeDevCommand": "yarn start",
"beforeBuildCommand": "yarn build"
},
"tauri": {
"bundle": {
"active": true,
"targets": ["msi", "app", "dmg", "updater"],
"identifier": "net.trivernis.mediarepo",
"icon": [
"icons/32x32.png",
"icons/64x64.png",
"icons/128x128.png",
"icons/128x128@2x.png",
"icons/icon.ico",
"icons/icon.icns"
],
"resources": [],
"externalBin": [],
"copyright": "",
"category": "Productivity",
"shortDescription": "A media management tool",
"longDescription": "",
"deb": {
"depends": []
},
"macOS": {
"frameworks": [],
"minimumSystemVersion": "",
"exceptionDomain": "",
"signingIdentity": null,
"entitlements": null
},
"windows": {
"certificateThumbprint": null,
"digestAlgorithm": "sha256",
"timestampUrl": ""
}
},
"updater": {
"active": false
},
"allowlist": {
"dialog": {
"all": true
},
"shell": {
"all": true
},
"path": {
"all": true
}
},
"windows": [
{
"title": "mediarepo",
"width": 1920,
"height": 1080,
"resizable": true,
"fullscreen": false
}
],
"security": {
"csp": null
}
}
}

@ -19,6 +19,7 @@ import {
GetSizeRequest,
GetTagsForFilesRequest,
InitRepositoryRequest,
IsJobRunningRequest,
ReadFileRequest,
RemoveRepositoryRequest,
ResolvePathsToFilesRequest,
@ -187,6 +188,10 @@ export class MediarepoApi {
return this.invokePlugin(ApiFunction.RunJob, request);
}
public static async isJobRunning(request: IsJobRunningRequest): Promise<boolean> {
return this.invokePlugin(ApiFunction.IsJobRunning, request);
}
public static async getAllSortingPresets(): Promise<SortingPresetData[]> {
return ShortCache.cached("sorting-presets", () => this.invokePlugin(ApiFunction.GetAllSortingPresets), 1000);
}

@ -28,9 +28,9 @@ export type PropertyQuery = PropertyQueryStatus
export type PropertyQueryStatus = { Status: FileStatus };
export type PropertyQueryFileSize = { FileSize: ValueComparator<number> };
export type PropertyQueryImportedTime = { ImportedTime: ValueComparator<Date> };
export type PropertyQueryChangedTime = { ChangedTime: ValueComparator<Date> };
export type PropertyQueryCreatedTime = { CreatedTime: ValueComparator<Date> };
export type PropertyQueryImportedTime = { ImportedTime: ValueComparator<string> };
export type PropertyQueryChangedTime = { ChangedTime: ValueComparator<string> };
export type PropertyQueryCreatedTime = { CreatedTime: ValueComparator<string> };
export type PropertyQueryTagCount = { TagCount: ValueComparator<number> };
export type PropertyQueryCd = { Cd: string };
export type PropertyQueryId = { Id: number };

@ -40,6 +40,7 @@ export enum ApiFunction {
SetFrontendState = "set_frontend_state",
// jobs
RunJob = "run_job",
IsJobRunning = "is_job_running",
// presets
GetAllSortingPresets = "all_sorting_presets",
AddSortingPreset = "add_sorting_preset",

@ -117,3 +117,7 @@ export type AddSortingPresetRequest = {
export type DeleteSortingPresetRequest = {
id: number
};
export type IsJobRunningRequest = {
jobType: JobType,
}

@ -1,9 +1,14 @@
import {FileBasicData, FileStatus} from "../api-types/files";
import {BehaviorSubject, Observable} from "rxjs";
export class File {
private statusSubject: BehaviorSubject<FileStatus>;
constructor(
private basicData: FileBasicData,
) {
this.statusSubject = new BehaviorSubject(basicData.status);
}
public get rawData(): FileBasicData {
@ -18,15 +23,20 @@ export class File {
return this.basicData.cd;
}
public get status(): FileStatus {
return this.basicData.status;
public get status(): Observable<FileStatus> {
return this.statusSubject.asObservable();
}
public get mimeType(): string {
return this.basicData.mime_type;
}
public set status(value: FileStatus) {
public setStatus(value: FileStatus) {
this.basicData.status = value;
this.statusSubject.next(value);
}
public get mimeType(): string {
return this.basicData.mime_type;
public getStatus(): FileStatus {
return this.basicData.status;
}
}

@@ -29,21 +29,21 @@ export class FilterQueryBuilder {
     public static importedTime(date: Date, comparator: Comparator, max_date: Date): FilterQuery {
         return filterQuery({
-            ImportedTime: valuesToCompareEnum(date, comparator,
-                max_date
+            ImportedTime: valuesToCompareEnum(formatDate(date)!!, comparator,
+                formatDate(max_date)
             )
         });
     }
     public static changedTime(date: Date, comparator: Comparator, max_date: Date): FilterQuery {
         return filterQuery({
-            ChangedTime: valuesToCompareEnum(date, comparator, max_date)
+            ChangedTime: valuesToCompareEnum(formatDate(date)!!, comparator, formatDate(max_date))
         });
     }
     public static createdTime(date: Date, comparator: Comparator, max_date: Date): FilterQuery {
         return filterQuery({
-            CreatedTime: valuesToCompareEnum(date, comparator, max_date)
+            CreatedTime: valuesToCompareEnum(formatDate(date)!!, comparator, formatDate(max_date))
         });
     }
@@ -150,6 +150,7 @@ export class FilterQueryBuilder {
                 }
                 break;
             case "ImportedTime":
+                console.debug(propertyName, rawComparator, compareValue);
                 value = this.parsePropertyValue(compareValue, parseDate);
                 if (value != undefined) {
                     return this.importedTime(value[0], comparator, value[1]);
@@ -263,7 +264,7 @@ function filterQuery(propertyQuery: PropertyQuery): FilterQuery {
     return { Property: propertyQuery };
 }
-function valuesToCompareEnum<T>(min_value: T, comparator: Comparator, max_value?: T): ValueComparator<T> {
+function valuesToCompareEnum<T>(min_value: T, comparator: Comparator, max_value: T | undefined): ValueComparator<T> {
     switch (comparator) {
         case "Less":
             return { Less: min_value };
@@ -299,9 +300,9 @@ function parseByteSize(value: string): number | undefined {
     if (number) {
         for (const key of Object.keys(valueMappings)) {
             if (checkUnit(key)) {
-                console.log("key", key, "valueMapping", valueMappings[key]);
+                console.debug("key", key, "valueMapping", valueMappings[key]);
                 number *= valueMappings[key];
-                console.log("number", number);
+                console.debug("number", number);
                 break;
             }
         }
@@ -311,7 +312,7 @@ function parseByteSize(value: string): number | undefined {
 }
 function parseDate(value: string): Date | undefined {
-    const date = Date.parse(value);
+    const date = Date.parse(value.toUpperCase());
     if (isNaN(date)) {
         return undefined;
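
The added `toUpperCase()` normalizes user input before parsing: the ECMAScript date-time string format requires an uppercase `T` separator (and `Z` suffix), so lowercase variants are implementation-dependent and may parse to `NaN`:

    Date.parse("2022-01-01t12:00:00");               // NaN in spec-strict engines
    Date.parse("2022-01-01t12:00:00".toUpperCase()); // a valid local-time timestamp
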
@@ -331,3 +332,13 @@ function parseStatus(value: string): FileStatus | undefined {
         return undefined;
     }
 }
+function formatDate(date?: Date): string | undefined {
+    if (date) {
+        const pad = (s: number) => s.toString().padStart(2, "0");
+        return `${date.getFullYear()}-${pad(date.getMonth() + 1)}-${pad(date.getDate())}T${pad(date.getHours())}:${pad(
+            date.getMinutes())}:${pad(
+            date.getSeconds())}`;
+    }
+    return;
+}

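The new `formatDate` helper emits a zero-padded, local-time string without a timezone suffix, which is exactly the `string` payload the time comparators above now take. A worked example:

    formatDate(new Date(2022, 0, 5, 9, 3, 7)); // "2022-01-05T09:03:07"
    formatDate(undefined);                     // undefined
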
@@ -1,7 +1,7 @@
 <div id="content">
     <mat-tab-group #tabGroup (selectedTabChange)="this.onTabSelectionChange($event)" animationDuration="0"
                    class="main-tab-group">
-        <mat-tab [label]="this.selectedRepository? 'RepositoryData' : 'Repositories'">
+        <mat-tab [label]="this.selectedRepository? 'Repository' : 'Repositories'">
             <app-repositories-tab></app-repositories-tab>
         </mat-tab>
         <mat-tab *ngFor="let tab of tabs; trackBy: trackByTabId">

@@ -17,7 +17,7 @@ import {MatSelectModule} from "@angular/material/select";
 import {MatCheckboxModule} from "@angular/material/checkbox";
 import {MatDividerModule} from "@angular/material/divider";
 import {NgIconsModule} from "@ng-icons/core";
-import * as materialIcons from "@ng-icons/material-icons";
+import {MatMoreVert, MatPlus} from "@ng-icons/material-icons/baseline";
 import {MatMenuModule} from "@angular/material/menu";
 import {InputModule} from "../shared/input/input.module";
 import {SidebarModule} from "../shared/sidebar/sidebar.module";
@@ -34,12 +34,14 @@ import {TagModule} from "../shared/tag/tag.module";
 import {
     DownloadDaemonDialogComponent
 } from "./repositories-tab/download-daemon-dialog/download-daemon-dialog.component";
-import {RepositoryModule} from "../shared/repository/repository/repository.module";
+import {RepositoryModule} from "../shared/repository/repository.module";
 import {MatToolbarModule} from "@angular/material/toolbar";
 import {
     RepositoryDetailsViewComponent
 } from "./repositories-tab/repository-details-view/repository-details-view.component";
-import { EmptyTabComponent } from './empty-tab/empty-tab.component';
+import {EmptyTabComponent} from "./empty-tab/empty-tab.component";
+import {RepositoryOverviewComponent} from "./repositories-tab/repository-overview/repository-overview.component";
+import {AboutDialogComponent} from "./repositories-tab/repository-overview/about-dialog/about-dialog.component";
 @NgModule({
@@ -54,6 +56,8 @@ import { EmptyTabComponent } from './empty-tab/empty-tab.component';
         DownloadDaemonDialogComponent,
         RepositoryDetailsViewComponent,
         EmptyTabComponent,
+        RepositoryOverviewComponent,
+        AboutDialogComponent,
     ],
     exports: [
         CoreComponent,
@@ -68,7 +72,7 @@ import { EmptyTabComponent } from './empty-tab/empty-tab.component';
         MatProgressBarModule,
         MatCheckboxModule,
         ScrollingModule,
-        NgIconsModule.withIcons({ ...materialIcons }),
+        NgIconsModule.withIcons({ MatPlus, MatMoreVert }),
         FlexModule,
         MatButtonModule,
         MatMenuModule,
@@ -85,7 +89,7 @@ import { EmptyTabComponent } from './empty-tab/empty-tab.component';
         MatInputModule,
         TagModule,
         RepositoryModule,
-        MatToolbarModule,
+        MatToolbarModule
     ]
 })
 export class CoreModule {

@@ -1,18 +1,4 @@
-<div *ngIf="!selectedRepository" class="repo-page-content">
-    <div class="add-repo-tools">
-        <button (click)="openAddRepositoryDialog()" color="primary" mat-flat-button>Add Repository</button>
-    </div>
-    <div class="repository-list">
-        <div *ngFor="let repository of repositories" class="repository-container">
-            <app-repository-card (openEvent)="this.onOpenRepository($event)"
-                                 [repository]="repository"></app-repository-card>
-        </div>
-        <app-middle-centered *ngIf="this.repositories.length === 0" class="add-repository-prompt">
-            <h1>There are no repositories yet. You can create a repository or add an existing one.</h1>
-            <button (click)="this.openAddRepositoryDialog()" color="primary" mat-flat-button>Add Repository</button>
-        </app-middle-centered>
-    </div>
-</div>
-<div *ngIf="selectedRepository" class="repo-details">
-    <app-repository-details-view [repository]="selectedRepository"></app-repository-details-view>
-</div>
+<app-repository-overview *ngIf="!this.selectedRepository"></app-repository-overview>
+<app-repository-details-view *ngIf="this.selectedRepository"
+                             [repository]="this.selectedRepository"></app-repository-details-view>

@@ -1,41 +1,3 @@
-.repository-container {
-    margin: 1em;
-}
-.repo-page-content {
-    margin: 0 10%;
-    height: calc(100% - 2em);
-}
-.add-repo-tools {
-    height: 5em;
-    display: flex;
-    flex-direction: row-reverse;
-    button {
-        margin: 1em;
-    }
-}
-.repository-list {
-    display: flex;
-    flex-direction: column;
-    overflow-y: auto;
-    height: calc(100% - 5em);
-}
-app-repository-card {
-    position: relative;
-}
-app-repository-details-view, .repo-details {
+app-repository-details-view {
     height: 100%;
 }
-.add-repository-prompt {
-    button {
-        font-size: 1.5em;
-        padding: 0.5em 1em;
-        border-radius: 0.5em;
-    }
-}

@@ -1,164 +1,18 @@
-import {AfterViewInit, Component, OnInit} from "@angular/core";
-import {Repository} from "../../../../api/models/Repository";
+import {Component} from "@angular/core";
 import {RepositoryService} from "../../../services/repository/repository.service";
-import {MatDialog, MatDialogRef} from "@angular/material/dialog";
-import {DownloadDaemonDialogComponent} from "./download-daemon-dialog/download-daemon-dialog.component";
-import {
-    AddRepositoryDialogComponent
-} from "../../shared/repository/repository/add-repository-dialog/add-repository-dialog.component";
-import {LoggingService} from "../../../services/logging/logging.service";
-import {BehaviorSubject} from "rxjs";
-import {BusyDialogComponent} from "../../shared/app-common/busy-dialog/busy-dialog.component";
-import {JobService} from "../../../services/job/job.service";
-import {StateService} from "../../../services/state/state.service";
+import {Repository} from "../../../../api/models/Repository";
-type BusyDialogContext = { message: BehaviorSubject<string>, dialog: MatDialogRef<BusyDialogComponent> };
 @Component({
     selector: "app-repositories-tab",
     templateUrl: "./repositories-tab.component.html",
     styleUrls: ["./repositories-tab.component.scss"]
 })
-export class RepositoriesTabComponent implements OnInit, AfterViewInit {
-    public repositories: Repository[] = [];
-    public selectedRepository?: Repository;
-    constructor(
-        private logger: LoggingService,
-        private repoService: RepositoryService,
-        private jobService: JobService,
-        private stateService: StateService,
-        public dialog: MatDialog
-    ) {
-    }
-    ngOnInit(): void {
-        this.repoService.repositories.subscribe({
-            next: (repos) => {
-                this.repositories = repos;
-            }
-        });
-        this.repoService.selectedRepository.subscribe(
-            repo => this.selectedRepository = repo);
-    }
+export class RepositoriesTabComponent {
-    public async ngAfterViewInit() {
-        await this.checkAndPromptDaemonExecutable();
-    }
-    public async startDaemonAndSelectRepository(repository: Repository) {
-        try {
-            let dialogContext = this.openStartupDialog(repository);
-            let daemonRunning = await this.repoService.checkDaemonRunning(
-                repository.path!);
-            if (!daemonRunning) {
-                dialogContext.message.next("Starting repository daemon...");
-                await this.repoService.startDaemon(repository.path!);
-                await new Promise((res, _) => {
-                    setTimeout(res, 2000); // wait for the daemon to start
-                });
-            }
-            await this.selectRepository(repository, dialogContext);
-        } catch (err: any) {
-            this.logger.error(err);
-        }
-    }
-    public async selectRepository(repository: Repository, dialogContext?: BusyDialogContext) {
-        dialogContext = dialogContext ?? this.openStartupDialog(repository);
-        try {
-            dialogContext.message.next("Opening repository...");
-            await this.repoService.setRepository(repository);
-            await this.runRepositoryStartupTasks(dialogContext);
-            dialogContext.message.next("Restoring previous tabs...");
-            await this.repoService.loadRepositories();
-            dialogContext.dialog.close(true);
-        } catch (err: any) {
-            this.logger.error(err);
-            dialogContext.message.next(
-                "Failed to open repository: " + err.toString());
-            await this.forceCloseRepository();
-            setTimeout(() => dialogContext!.dialog.close(true), 1000);
-        }
-    }
-    public openAddRepositoryDialog() {
-        this.dialog.open(AddRepositoryDialogComponent, {
-            disableClose: true,
-            minWidth: "30%",
-            minHeight: "30%",
-        });
-    }
-    public async onOpenRepository(repository: Repository) {
-        if (!repository.local) {
-            await this.selectRepository(repository);
-        } else {
-            await this.startDaemonAndSelectRepository(repository);
-        }
-    }
-    private async forceCloseRepository() {
-        try {
-            await this.repoService.closeSelectedRepository();
-        } catch {
-        }
-        try {
-            await this.repoService.disconnectSelectedRepository();
-        } catch {
-        }
-    }
-    private async runRepositoryStartupTasks(dialogContext: BusyDialogContext): Promise<void> {
-        dialogContext.message.next("Checking integrity...");
-        await this.jobService.runJob("CheckIntegrity");
-        dialogContext.message.next("Running a vacuum on the database...");
-        await this.jobService.runJob("Vacuum");
-        dialogContext.message.next(
-            "Migrating content descriptors to new format...");
-        await this.jobService.runJob("MigrateContentDescriptors");
-        dialogContext.message.next("Calculating repository sizes...");
-        await this.jobService.runJob("CalculateSizes", false);
-        dialogContext.message.next("Generating missing thumbnails...");
-        await this.jobService.runJob("GenerateThumbnails");
-        dialogContext.message.next("Finished repository startup");
-    }
-    private openStartupDialog(repository: Repository): BusyDialogContext {
-        const dialogMessage = new BehaviorSubject<string>(
-            "Opening repository...");
-        let dialog = this.dialog.open(BusyDialogComponent, {
-            data: {
-                title: `Opening repository '${repository.name}'`,
-                message: dialogMessage,
-                allowCancel: true,
-            }, disableClose: true,
-            minWidth: "30%",
-            minHeight: "30%",
-        });
-        dialog.afterClosed().subscribe(async (result) => {
-            if (!result) {
-                await this.forceCloseRepository();
-            }
-        });
-        return { message: dialogMessage, dialog };
-    }
+    public selectedRepository?: Repository;
-    private async checkAndPromptDaemonExecutable() {
-        if (!await this.repoService.checkDameonConfigured()) {
-            const result = await this.dialog.open(
-                DownloadDaemonDialogComponent,
-                {
-                    disableClose: true,
-                }
-            ).afterClosed().toPromise();
-            if (result) {
-                // recursion avoidance
-                setTimeout(
-                    async () => await this.checkAndPromptDaemonExecutable(), 0);
-            }
-        }
+    constructor(private repositoryService: RepositoryService) {
+        const sub = this.repositoryService.selectedRepository.subscribe(repo => this.selectedRepository = repo);
+    }
 }

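The tab component shrinks to tracking the selected repository; the daemon, dialog, and startup-job logic it used to host appears to have moved behind the new `app-repository-overview` component. One detail the slim version leaves open is teardown: the constructor's subscription is never released. A sketch of how that could look with `OnDestroy` (not part of this diff):

    import {Component, OnDestroy} from "@angular/core";
    import {Subscription} from "rxjs";
    import {Repository} from "../../../../api/models/Repository";
    import {RepositoryService} from "../../../services/repository/repository.service";

    @Component({
        selector: "app-repositories-tab",
        templateUrl: "./repositories-tab.component.html",
        styleUrls: ["./repositories-tab.component.scss"]
    })
    export class RepositoriesTabComponent implements OnDestroy {
        public selectedRepository?: Repository;
        private readonly sub: Subscription;

        constructor(repositoryService: RepositoryService) {
            this.sub = repositoryService.selectedRepository
                .subscribe(repo => this.selectedRepository = repo);
        }

        public ngOnDestroy(): void {
            this.sub.unsubscribe(); // drop the subscription with the component
        }
    }
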
@@ -6,7 +6,7 @@ import {ConfirmDialogComponent} from "../../../shared/app-common/confirm-dialog/
 import {BusyIndicatorComponent} from "../../../shared/app-common/busy-indicator/busy-indicator.component";
 import {
     EditRepositoryDialogComponent
-} from "../../../shared/repository/repository/edit-repository-dialog/edit-repository-dialog.component";
+} from "../../../shared/repository/edit-repository-dialog/edit-repository-dialog.component";
 @Component({
     selector: "app-repository-card",

@@ -4,45 +4,59 @@
     </button>
 </mat-toolbar>
 <div class="details-content" fxLayout="row">
-    <div class="repository-metadata" fxFlex="100%">
-        <h1>Stats</h1>
-        <app-metadata-entry *ngIf="repository.path" attributeName="Path">{{repository.path}}</app-metadata-entry>
-        <app-metadata-entry *ngIf="repository.address"
-                            attributeName="Address">{{repository.address}}</app-metadata-entry>
-        <app-metadata-entry attributeName="File Count">
-            <mat-progress-bar *ngIf="!metadata"></mat-progress-bar>
-            {{metadata ? metadata!.file_count.toString() : ''}}
-        </app-metadata-entry>
-        <app-metadata-entry attributeName="Tag Count">
-            <mat-progress-bar *ngIf="!metadata"></mat-progress-bar>
-            {{metadata ? metadata!.tag_count.toString() : ''}}
-        </app-metadata-entry>
-        <app-metadata-entry attributeName="Namespace Count">
-            <mat-progress-bar *ngIf="!metadata"></mat-progress-bar>
-            {{metadata ? metadata!.namespace_count.toString() : ''}}
-        </app-metadata-entry>
-        <app-metadata-entry attributeName="Mapping Count">
-            <mat-progress-bar *ngIf="!metadata"></mat-progress-bar>
-            {{metadata ? metadata!.mapping_count.toString() : ''}}
-        </app-metadata-entry>
-        <app-metadata-entry attributeName="Total Size">
-            <mat-progress-bar *ngIf="(this.totalSize | async) === undefined" mode="indeterminate"></mat-progress-bar>
-            {{this.totalSize | async}}
-        </app-metadata-entry>
-        <app-metadata-entry attributeName="File Folder Size">
-            <mat-progress-bar *ngIf="(this.fileFolderSize | async) === undefined"
-                              mode="indeterminate"></mat-progress-bar>
-            {{this.fileFolderSize | async}}
-        </app-metadata-entry>
-        <app-metadata-entry attributeName="Thumbnail Folder Size">
-            <mat-progress-bar *ngIf="(this.thumbFolderSize | async) === undefined"
-                              mode="indeterminate"></mat-progress-bar>
-            {{this.thumbFolderSize | async}}
-        </app-metadata-entry>
-        <app-metadata-entry attributeName="Database File Size">
-            <mat-progress-bar *ngIf="(this.databaseFileSize | async) === undefined"
-                              mode="indeterminate"></mat-progress-bar>
-            {{this.databaseFileSize | async}}
-        </app-metadata-entry>
+    <div class="repository-metadata" fxFlex="50%">
+        <div class="stats-container">
+            <h1>Stats</h1>
+            <app-metadata-entry *ngIf="repository.path" attributeName="Path">{{repository.path}}</app-metadata-entry>
+            <app-metadata-entry *ngIf="repository.address"
+                                attributeName="Address">{{repository.address}}</app-metadata-entry>
+            <app-metadata-entry attributeName="File Count">
+                <mat-progress-bar *ngIf="!metadata"></mat-progress-bar>
+                {{metadata ? metadata!.file_count.toString() : ''}}
+            </app-metadata-entry>
+            <app-metadata-entry attributeName="Tag Count">
+                <mat-progress-bar *ngIf="!metadata"></mat-progress-bar>
+                {{metadata ? metadata!.tag_count.toString() : ''}}
+            </app-metadata-entry>
+            <app-metadata-entry attributeName="Namespace Count">
+                <mat-progress-bar *ngIf="!metadata"></mat-progress-bar>
+                {{metadata ? metadata!.namespace_count.toString() : ''}}
+            </app-metadata-entry>
+            <app-metadata-entry attributeName="Mapping Count">
+                <mat-progress-bar *ngIf="!metadata"></mat-progress-bar>
+                {{metadata ? metadata!.mapping_count.toString() : ''}}
+            </app-metadata-entry>
+            <app-metadata-entry attributeName="Total Size">
+                <mat-progress-bar *ngIf="(this.totalSize | async) === undefined"
+                                  mode="indeterminate"></mat-progress-bar>
+                {{this.totalSize | async}}
+            </app-metadata-entry>
+            <app-metadata-entry attributeName="File Folder Size">
+                <mat-progress-bar *ngIf="(this.fileFolderSize | async) === undefined"
+                                  mode="indeterminate"></mat-progress-bar>
+                {{this.fileFolderSize | async}}
+            </app-metadata-entry>
+            <app-metadata-entry attributeName="Thumbnail Folder Size">
+                <mat-progress-bar *ngIf="(this.thumbFolderSize | async) === undefined"
+                                  mode="indeterminate"></mat-progress-bar>
+                {{this.thumbFolderSize | async}}
+            </app-metadata-entry>
+            <app-metadata-entry attributeName="Database File Size">
+                <mat-progress-bar *ngIf="(this.databaseFileSize | async) === undefined"
+                                  mode="indeterminate"></mat-progress-bar>
+                {{this.databaseFileSize | async}}
+            </app-metadata-entry>
+        </div>
+        <div class="repository-charts">
+            <app-chart *ngIf="this.chartData"
+                       [datasets]="this.chartData"
+                       [labels]="this.chartLabels"
+                       chartType="doughnut"
+                       class="size-chart"
+                       title="Sizes"></app-chart>
+        </div>
+    </div>
+    <div fxFlex="50%">
+        <app-repository-maintenance class="repo-maintenance"></app-repository-maintenance>
+    </div>
 </div>

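The details view now splits into a stats column with a doughnut chart and a maintenance column. The input types of `app-chart` are not part of this diff; a Chart.js-style shape for the two bindings is assumed here purely for illustration:

    // Hypothetical component fields backing [labels] and [datasets] above.
    chartLabels = ["Files", "Thumbnails", "Database"];
    chartData = [{ data: [4096, 512, 128] }]; // e.g. folder sizes in MiB
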
@@ -1,3 +1,10 @@
+@import "src/colors";
+:host {
+    height: 100%;
+    width: 100%;
+}
 .repository-name {
     text-align: center;
     align-self: center;
@@ -12,16 +19,45 @@
     padding: 1em 1em 1em 3em;
     overflow-y: auto;
     user-select: none;
-    margin-left: 20%;
-    margin-right: 20%;
+    margin-top: 4em;
+    margin-left: 2em;
+    display: flex;
     app-metadata-entry {
         margin-bottom: 0.5em;
         display: block;
     }
+    .stats-container {
+        margin-left: auto;
+        margin-right: auto;
+        display: block;
+        width: 50%;
+    }
+    .repository-charts {
+        margin-top: 4em;
+        margin-right: 2em;
+        height: calc(100% - 4em);
+        width: 50%;
+        display: block;
+    }
 }
 .details-content {
     height: calc(100% - 64px);
+    overflow: hidden;
+    width: 75%;
+    margin: auto;
+    background: $background-lighter-05;
 }
+.size-chart {
+    min-height: 50%;
+    max-width: 500px;
+    margin: auto;
+}
+.repo-maintenance {
+    padding: 2em;
+}
