
docker-cloud-media-scripts's People

Contributors

alexyao2015, dhowellbc, madslundt, nomuas


docker-cloud-media-scripts's Issues

More verbose output of rclone for uploads

I believe it would be much more beneficial to capture more verbose output from rclone copy in order to verify that files are uploaded and to get more insight into those uploads. Currently, if uploads are interrupted or fail, there is nothing to indicate as much (beyond maybe corrupted files), since the notification to cloudupload.log is written before rclone copy even begins (part of the original issue I encountered).
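
A minimal sketch of the kind of logging I mean (assuming the upload is a plain rclone copy; paths, the variable name and the log location are placeholders, not the script's actual internals), where the completion notice is only written after rclone has actually returned:

rclone copy /local-decrypt "${rclone_cloud_endpoint}" \
    --config /config/rclone.conf \
    -v --stats 1m --log-file /log/cloudupload.log
rc=$?
if [ "$rc" -eq 0 ]; then
    echo "[ $(date) ] Upload finished OK" >> /log/cloudupload.log
else
    echo "[ $(date) ] Upload FAILED (rclone exit code $rc)" >> /log/cloudupload.log
fi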

:shared not visible on host nor other containers

Hi there,

I have been trying to set everything up with Docker on Synology and all seems OK, but I am unable to see any files in local-media or in any of the other shared mounts (I can see files created inside config and logs), neither from File Station nor from another container running Ubuntu.

The config I used to create the container is:

docker create \
--name cloud-media-scripts \
-v /volume1/dockersconfig/gdrive/local-media:/local-media:shared \
-v /volume1/dockersconfig/gdrive/local-decrypt:/local-decrypt:shared \
-v /volume1/dockersconfig/gdrive/cloud-encrypt:/cloud-encrypt:shared \
-v /volume1/dockersconfig/gdrive/cloud-decrypt:/cloud-decrypt:shared \
-v /volume1/dockersconfig/gdrive/config:/config \
-v /volume1/dockersconfig/gdrive/chunks:/chunks \
-v /volume1/dockersconfig/gdrive/logs:/log \
-e CLEAR_CHUNK_MAX_SIZE="1000G" \
-e REMOVE_LOCAL_FILES_WHEN_SPACE_EXCEEDS_GB="2000" \
-e FREEUP_ATLEAST_GB="1000" \
--privileged --cap-add=MKNOD --cap-add=SYS_ADMIN --device=/dev/fuse \
madslundt/cloud-media-scripts

And I can see from logs that everything is mounted with no errors and files can be seen in the container.
Do you know if I am doing something wrong with the creation commands?

Regards

Can't start container: mount with :shared is causing issues

Hi all,

after updating my QNAP it seems that using the :shared property causes a problem.

When I remove it I can only use the local content (as intended); when I add it I am not able to start the container.

Error response from daemon: linux mounts: Path /share/Media/cloud-encrypt is mounted on /share/CACHEDEV1_DATA but it is not a shared mount.

I can access the folders through /share/Media or through /share/CACHEDEV1_DATA which seems to be linked.

From the console I tried something like "mount --make-shared /share/CACHEDEV1_DATA", but I only get an error.

Does anybody have an idea how to mount this share, or how to solve this problem in general? Normally I would say I don't even need CACHEDEV1_DATA, since it worked without it before...

Thank you. :-)
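
Not QNAP-specific advice, but on other systems the usual workaround is to bind-mount the path onto itself (so it becomes a mount point of its own) and then mark it shared, roughly:

mount --bind /share/CACHEDEV1_DATA /share/CACHEDEV1_DATA
mount --make-shared /share/CACHEDEV1_DATA

mount --make-shared only works on something that is itself a mount point, which would explain why the plain command errors out.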

Keeps waiting for mount /cloud-decrypt

I completed the container setup following your instructions, but the container keeps saying "Waiting for mount /cloud-decrypt".

/cloud-encrypt has been properly mounted, so rclone should be able to mount /cloud-decrypt. However, I don't see where in the setup rclone is instructed to use /cloud-encrypt.

Shouldn't it somehow be specified in the last "Third and last one is for the local encryption/decryption" step of the rclone configuration?

Thanks !

Prevent Specific Folders from uploading

It would be nice if we could place a file like ".nocloud" in a folder on the local-media mount to prevent that folder from being uploaded to the cloud. This is what I did with my upload scripts before moving my install to Docker. Maybe this is already possible and I overlooked it. Thanks, it's working great so far, though I haven't tried the upload and deletion scripts yet.

Edit: Just realized that since you can pass arguments with the upload script to rclone, you can use the argument --exclude-if-present="" to accomplish what I wanted

Update: For some reason the arguments aren't being passed to rclone. Here is what I have in my crontab:
20 8 * * * docker exec plexdrive rmlocal -v --bwlimit 25M --exclude-if-present=".no_cloud"
20 4 * * * docker exec plexdrive cloudupload -v --bwlimit 25M --exclude-if-present=".no_cloud"
This is an excerpt from ps aux ran inside the container:
root 724 0.0 0.0 18048 2832 ? Ss 04:24 0:00 /bin/bash /usr/bin/rmlocal -v --bwlimit 25M --exclude-if-present=.no_cloud
root 761 0.0 0.0 18208 2976 ? S 04:24 0:00 /bin/bash /usr/bin/rmlocal.script -v --bwlimit 25M --exclude-if-present=.no_cloud
root 762 0.0 0.0 4380 700 ? S 04:24 0:00 tee /log/rmlocal.log
root 811 0.0 0.0 23136 4208 ? S 04:24 0:00 sort -n
root 812 0.0 0.0 4384 660 ? S 04:24 0:00 cut -d: -f2-
root 813 0.0 0.0 7588 888 ? S 04:24 0:00 awk {$1=$2=$3=""; print $0}
root 814 0.0 0.0 18212 2492 ? S 04:24 0:00 /bin/bash /usr/bin/rmlocal.script -v --bwlimit 25M --exclude-if-present=.no_cloud
root 991 0.0 0.0 18252 3304 pts/1 Ss 04:24 0:00 /bin/bash
root 1070 12.9 1.7 570140 560096 ? Sl 04:25 0:16 rclone move --config=/config/rclone.conf --buffer-size 500M --checkers 16 /local-decrypt/music/somedir/somefile
root 1084 0.0 0.0 34424 2888 pts/1 R+ 04:27 0:00 ps aux
The arguments get passed to the script inside the container but not to rclone. Please let me know what I'm doing wrong or if it's not just me. Thanks!

Update 2: After further testing, arguments are passed by the cloudupload script but not by the rmlocal script. I have also found that using the --exclude-if-present or --exclude parameters with rclone won't work on parent directories of the files the scripts are trying to move or copy, because the scripts pass specific paths, which negates what I'm trying to exclude from being moved or copied. So it would be nice if we could tell the script which directories we would like excluded. For the time being, since I don't want my music being copied, I can use --exclude=".flac" --exclude=".mp3", but not until I can get these parameters to pass through to rclone from the rmlocal script.
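
For reference, the kind of change to the rmlocal wrapper I have in mind is roughly this (variable names are illustrative only, not the script's actual internals): forward any extra arguments through to rclone with "$@":

# sketch of rmlocal.script forwarding its arguments to the underlying rclone move
rclone move --config=/config/rclone.conf "$@" \
    "${local_decrypt_dir}/${file}" "${rclone_cloud_endpoint}${dir}"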

Symlinks seem to point to wrong dir?

So I am going crazy and don't know where the problem lies... To make my life easy I do ln -s /local-media/Media/Shows to /tv, and the same with Movies. Movies looks fine, yet Shows doesn't?

If I mount the encrypted version to /test it looks fine...

EX: sudo rclone mount --buffer-size 500M --checkers 16 --allow-non-empty --allow-other gd-decrypt:/Media /test &

Looks fine with /Media/Shows/ShowName and /Media/Movies/MovieNames

Then I do this.... and it mounts to /tv/Shows? That makes no sense...
sudo ln -s /local-media/Media/Shows /tv
yet the /Movies version looks fine?
sudo ln -s /local-media/Media/Movies /movies
The layout is the same, the manually mounted dirs look fine, but the symlinks look off for /local-media?

[Enhancement] Upgrade plexdrive and rclone

Please upgrade plexdrive to v5.0 and rclone to v1.46.
Plexdrive v5.0 changelog:
MongoDB replaced by BoltDB
Performance increase
Async deletion of files
MacOS playback issue bugfix
Rename files/directories
Traverse directory structure (find/du)
rclone v1.46 changelog (from Rclone 1.39)
v1.46 - 2019-02-09
New backends
Support Alibaba Cloud (Aliyun) OSS via the s3 backend (Nick Craig-Wood)
New commands
serve dlna: serves a remote via DLNA for the local network (nicolov)
New Features
copy, move: Restore deprecated --no-traverse flag (Nick Craig-Wood)
This is useful for when transferring a small number of files into a large destination
genautocomplete: Add remote path completion for bash completion (Christopher Peterson & Danil Semelenov)
Buffer memory handling reworked to return memory to the OS better (Nick Craig-Wood)
Buffer recycling library to replace sync.Pool
Optionally use memory mapped memory for better memory shrinking
Enable with --use-mmap if having memory problems - not default yet
Parallelise reading of files specified by --files-from (Nick Craig-Wood)
check: Add stats showing total files matched. (Dario Guzik)
Allow rename/delete open files under Windows (Nick Craig-Wood)
lsjson: Use exactly the correct number of decimal places in the seconds (Nick Craig-Wood)
Add cookie support with cmdline switch --use-cookies for all HTTP based remotes (qip)
Warn if --checksum is set but there are no hashes available (Nick Craig-Wood)
Rework rate limiting (pacer) to be more accurate and allow bursting (Nick Craig-Wood)
Improve error reporting for too many/few arguments in commands (Nick Craig-Wood)
listremotes: Remove -l short flag as it conflicts with the new global flag (weetmuts)
Make http serving with auth generate INFO messages on auth fail (Nick Craig-Wood)
Bug Fixes
Fix layout of stats (Nick Craig-Wood)
Fix --progress crash under Windows Jenkins (Nick Craig-Wood)
Fix transfer of google/onedrive docs by calling Rcat in Copy when size is -1 (Cnly)
copyurl: Fix checking of --dry-run (Denis Skovpen)
Mount
Check that mountpoint and local directory to mount don’t overlap (Nick Craig-Wood)
Fix mount size under 32 bit Windows (Nick Craig-Wood)
VFS
Implement renaming of directories for backends without DirMove (Nick Craig-Wood)
now all backends except b2 support renaming directories
Implement --vfs-cache-max-size to limit the total size of the cache (Nick Craig-Wood)
Add --dir-perms and --file-perms flags to set default permissions (Nick Craig-Wood)
Fix deadlock on concurrent operations on a directory (Nick Craig-Wood)
Fix deadlock between RWFileHandle.close and File.Remove (Nick Craig-Wood)
Fix renaming/deleting open files with cache mode “writes” under Windows (Nick Craig-Wood)
Fix panic on rename with --dry-run set (Nick Craig-Wood)
Fix vfs/refresh with recurse=true needing the --fast-list flag
Local
Add support for -l/--links (symbolic link translation) (yair@unicorn)
this works by showing links as link.rclonelink - see local backend docs for more info
this errors if used with -L/--copy-links
Fix renaming/deleting open files on Windows (Nick Craig-Wood)
Crypt
Check for maximum length before decrypting filename to fix panic (Garry McNulty)
Azure Blob
Allow building azureblob backend on *BSD (themylogin)
Use the rclone HTTP client to support --dump headers, --tpslimit etc (Nick Craig-Wood)
Use the s3 pacer for 0 delay in non error conditions (Nick Craig-Wood)
Ignore directory markers (Nick Craig-Wood)
Stop Mkdir attempting to create existing containers (Nick Craig-Wood)
B2
cleanup: will remove unfinished large files >24hrs old (Garry McNulty)
For a bucket limited application key check the bucket name (Nick Craig-Wood)
before this, rclone would use the authorised bucket regardless of what you put on the command line
Added --b2-disable-checksum flag (Wojciech Smigielski)
this enables large files to be uploaded without a SHA-1 hash for speed reasons
Drive
Set default pacer to 100ms for 10 tps (Nick Craig-Wood)
This fits the Google defaults much better and reduces the 403 errors massively
Add --drive-pacer-min-sleep and --drive-pacer-burst to control the pacer
Improve ChangeNotify support for items with multiple parents (Fabian Möller)
Fix ListR for items with multiple parents - this fixes oddities with vfs/refresh (Fabian Möller)
Fix using --drive-impersonate and appfolders (Nick Craig-Wood)
Fix google docs in rclone mount for some (not all) applications (Nick Craig-Wood)
Dropbox
Retry-After support for Dropbox backend (Mathieu Carbou)
FTP
Wait for 60 seconds for a connection to Close then declare it dead (Nick Craig-Wood)
helps with indefinite hangs on some FTP servers
Google Cloud Storage
Update google cloud storage endpoints (weetmuts)
HTTP
Add an example with username and password which is supported but wasn’t documented (Nick Craig-Wood)
Fix backend with --files-from and non-existent files (Nick Craig-Wood)
Hubic
Make error message more informative if authentication fails (Nick Craig-Wood)
Jottacloud
Resume and deduplication support (Oliver Heyme)
Use token auth for all API requests; don't store the password anymore (Sebastian Bünger)
Add support for 2-factor authentication (Sebastian Bünger)
Mega
Implement v2 account login which fixes logins for newer Mega accounts (Nick Craig-Wood)
Return error if an unknown length file is attempted to be uploaded (Nick Craig-Wood)
Add new error codes for better error reporting (Nick Craig-Wood)
Onedrive
Fix broken support for “shared with me” folders (Alex Chen)
Fix root ID not normalised (Cnly)
Return err instead of panic on unknown-sized uploads (Cnly)
Qingstor
Fix go routine leak on multipart upload errors (Nick Craig-Wood)
Add upload chunk size/concurrency/cutoff control (Nick Craig-Wood)
Default --qingstor-upload-concurrency to 1 to work around bug (Nick Craig-Wood)
S3
Implement --s3-upload-cutoff for single part uploads below this (Nick Craig-Wood)
Change --s3-upload-concurrency default to 4 to increase performance (Nick Craig-Wood)
Add --s3-bucket-acl to control bucket ACL (Nick Craig-Wood)
Auto detect region for buckets on operation failure (Nick Craig-Wood)
Add GLACIER storage class (William Cocker)
Add Scaleway to s3 documentation (Rémy Léone)
Add AWS endpoint eu-north-1 (weetmuts)
SFTP
Add support for PEM encrypted private keys (Fabian Möller)
Add option to force the usage of an ssh-agent (Fabian Möller)
Perform environment variable expansion on key-file (Fabian Möller)
Fix rmdir on Windows based servers (eg CrushFTP) (Nick Craig-Wood)
Fix rmdir deleting directory contents on some SFTP servers (Nick Craig-Wood)
Fix error on dangling symlinks (Nick Craig-Wood)
Swift
Add --swift-no-chunk to disable segmented uploads in rcat/mount (Nick Craig-Wood)
Introduce application credential auth support (kayrus)
Fix memory usage by slimming Object (Nick Craig-Wood)
Fix extra requests on upload (Nick Craig-Wood)
Fix reauth on big files (Nick Craig-Wood)
Union
Fix poll-interval not working (Nick Craig-Wood)
WebDAV
Support About which means rclone mount will show the correct disk size (Nick Craig-Wood)
Support MD5 and SHA1 hashes with Owncloud and Nextcloud (Nick Craig-Wood)
Fail soft on time parsing errors (Nick Craig-Wood)
Fix infinite loop on failed directory creation (Nick Craig-Wood)
Fix identification of directories for Bitrix Site Manager (Nick Craig-Wood)
Fix upload of 0 length files on some servers (Nick Craig-Wood)
Fix if MKCOL fails with 423 Locked assume the directory exists (Nick Craig-Wood)

v1.45 - 2018-11-24

New backends
    The Yandex backend was re-written - see below for details (Sebastian Bünger)
New commands
    rcd: New command just to serve the remote control API (Nick Craig-Wood)
New Features
    The remote control API (rc) was greatly expanded to allow full control over rclone (Nick Craig-Wood)
        sensitive operations require authorization or the --rc-no-auth flag
        config/* operations to configure rclone
        options/* for reading/setting command line flags
        operations/* for all low level operations, eg copy file, list directory
        sync/* for sync, copy and move
        --rc-files flag to serve files on the rc http server
            this is for building web native GUIs for rclone
        Optionally serving objects on the rc http server
        Ensure rclone fails to start up if the --rc port is in use already
        See the rc docs for more info
    sync/copy/move
        Make --files-from only read the objects specified and don’t scan directories (Nick Craig-Wood)
            This is a huge speed improvement for destinations with lots of files
    filter: Add --ignore-case flag (Nick Craig-Wood)
    ncdu: Add remove function (’d’ key) (Henning Surmeier)
    rc command
        Add --json flag for structured JSON input (Nick Craig-Wood)
        Add --user and --pass flags and interpret --rc-user, --rc-pass, --rc-addr (Nick Craig-Wood)
    build
        Require go1.8 or later for compilation (Nick Craig-Wood)
        Enable softfloat on MIPS arch (Scott Edlund)
        Integration test framework revamped with a better report and better retries (Nick Craig-Wood)
Bug Fixes
    cmd: Make --progress update the stats correctly at the end (Nick Craig-Wood)
    config: Create config directory on save if it is missing (Nick Craig-Wood)
    dedupe: Check for existing filename before renaming a dupe file (ssaqua)
    move: Don’t create directories with --dry-run (Nick Craig-Wood)
    operations: Fix Purge and Rmdirs when dir is not the root (Nick Craig-Wood)
    serve http/webdav/restic: Ensure rclone exits if the port is in use (Nick Craig-Wood)
Mount
    Make --volname work for Windows and macOS (Nick Craig-Wood)
Azure Blob
    Avoid context deadline exceeded error by setting a large TryTimeout value (brused27)
    Fix erroneous Rmdir error “directory not empty” (Nick Craig-Wood)
    Wait for up to 60s to create a just deleted container (Nick Craig-Wood)
Dropbox
    Add dropbox impersonate support (Jake Coggiano)
Jottacloud
    Fix bug in --fast-list handing of empty folders (albertony)
Opendrive
    Fix transfer of files with + and & in (Nick Craig-Wood)
    Fix retries of upload chunks (Nick Craig-Wood)
S3
    Set ACL for server side copies to that provided by the user (Nick Craig-Wood)
    Fix role_arn, credential_source, … (Erik Swanson)
    Add config info for Wasabi’s US-West endpoint (Henry Ptasinski)
SFTP
    Ensure file hash checking is really disabled (Jon Fautley)
Swift
    Add pacer for retries to make swift more reliable (Nick Craig-Wood)
WebDAV
    Add Content-Type to PUT requests (Nick Craig-Wood)
    Fix config parsing so --webdav-user and --webdav-pass flags work (Nick Craig-Wood)
    Add RFC3339 date format (Ralf Hemberger)
Yandex
    The yandex backend was re-written (Sebastian Bünger)
        This implements low level retries (Sebastian Bünger)
        Copy, Move, DirMove, PublicLink and About optional interfaces (Sebastian Bünger)
        Improved general error handling (Sebastian Bünger)
        Removed ListR for now due to inconsistent behaviour (Sebastian Bünger)

v1.44 - 2018-10-15

New commands
    serve ftp: Add ftp server (Antoine GIRARD)
    settier: perform storage tier changes on supported remotes (sandeepkru)
New Features
    Reworked command line help
        Make default help less verbose (Nick Craig-Wood)
        Split flags up into global and backend flags (Nick Craig-Wood)
        Implement specialised help for flags and backends (Nick Craig-Wood)
        Show URL of backend help page when starting config (Nick Craig-Wood)
    stats: Long names now split in center (Joanna Marek)
    Add --log-format flag for more control over log output (dcpu)
    rc: Add support for OPTIONS and basic CORS (frenos)
    stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes)
Bug Fixes
    Fix -P not ending with a new line (Nick Craig-Wood)
    config: don’t create default config dir when user supplies --config (albertony)
    Don’t print non-ASCII characters with --progress on windows (Nick Craig-Wood)
    Correct logs for excluded items (ssaqua)
Mount
    Remove EXPERIMENTAL tags (Nick Craig-Wood)
VFS
    Fix race condition detected by serve ftp tests (Nick Craig-Wood)
    Add vfs/poll-interval rc command (Fabian Möller)
    Enable rename for nearly all remotes using server side Move or Copy (Nick Craig-Wood)
    Reduce directory cache cleared by poll-interval (Fabian Möller)
    Remove EXPERIMENTAL tags (Nick Craig-Wood)
Local
    Skip bad symlinks in dir listing with -L enabled (Cédric Connes)
    Preallocate files on Windows to reduce fragmentation (Nick Craig-Wood)
    Preallocate files on linux with fallocate(2) (Nick Craig-Wood)
Cache
    Add cache/fetch rc function (Fabian Möller)
    Fix worker scale down (Fabian Möller)
    Improve performance by not sending info requests for cached chunks (dcpu)
    Fix error return value of cache/fetch rc method (Fabian Möller)
    Documentation fix for cache-chunk-total-size (Anagh Kumar Baranwal)
    Preserve leading / in wrapped remote path (Fabian Möller)
    Add plex_insecure option to skip certificate validation (Fabian Möller)
    Remove entries that no longer exist in the source (dcpu)
Crypt
    Preserve leading / in wrapped remote path (Fabian Möller)
Alias
    Fix handling of Windows network paths (Nick Craig-Wood)
Azure Blob
    Add --azureblob-list-chunk parameter (Santiago Rodríguez)
    Implemented settier command support on azureblob remote. (sandeepkru)
    Work around SDK bug which causes errors for chunk-sized files (Nick Craig-Wood)
Box
    Implement link sharing. (Sebastian Bünger)
Drive
    Add --drive-import-formats - google docs can now be imported (Fabian Möller)
        Rewrite mime type and extension handling (Fabian Möller)
        Add document links (Fabian Möller)
        Add support for multipart document extensions (Fabian Möller)
        Add support for apps-script to json export (Fabian Möller)
        Fix escaped chars in documents during list (Fabian Möller)
    Add --drive-v2-download-min-size a workaround for slow downloads (Fabian Möller)
    Improve directory notifications in ChangeNotify (Fabian Möller)
    When listing team drives in config, continue on failure (Nick Craig-Wood)
FTP
    Add a small pause after failed upload before deleting file (Nick Craig-Wood)
Google Cloud Storage
    Fix service_account_file being ignored (Fabian Möller)
Jottacloud
    Minor improvement in quota info (omit if unlimited) (albertony)
    Add --fast-list support (albertony)
    Add permanent delete support: --jottacloud-hard-delete (albertony)
    Add link sharing support (albertony)
    Fix handling of reserved characters. (Sebastian Bünger)
    Fix socket leak on Object.Remove (Nick Craig-Wood)
Onedrive
    Rework to support Microsoft Graph (Cnly)
        NB this will require re-authenticating the remote
    Removed upload cutoff and always do session uploads (Oliver Heyme)
    Use single-part upload for empty files (Cnly)
    Fix new fields not saved when editing old config (Alex Chen)
    Fix sometimes special chars in filenames not replaced (Alex Chen)
    Ignore OneNote files by default (Alex Chen)
    Add link sharing support (jackyzy823)
S3
    Use custom pacer, to retry operations when reasonable (Craig Miskell)
    Use configured server-side-encryption and storage class options when calling CopyObject() (Paul Kohout)
    Make --s3-v2-auth flag (Nick Craig-Wood)
    Fix v2 auth on files with spaces (Nick Craig-Wood)
Union
    Implement union backend which reads from multiple backends (Felix Brucker)
    Implement optional interfaces (Move, DirMove, Copy etc) (Nick Craig-Wood)
    Fix ChangeNotify to support multiple remotes (Fabian Möller)
    Fix --backup-dir on union backend (Nick Craig-Wood)
WebDAV
    Add another time format (Nick Craig-Wood)
    Add a small pause after failed upload before deleting file (Nick Craig-Wood)
    Add workaround for missing mtime (buergi)
    Sharepoint: Renew cookies after 12hrs (Henning Surmeier)
Yandex
    Remove redundant nil checks (teresy)

v1.43.1 - 2018-09-07

Point release to fix hubic and azureblob backends.

Bug Fixes
    ncdu: Return error instead of log.Fatal in Show (Fabian Möller)
    cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood)
    docs: Tidy website display (Anagh Kumar Baranwal)
Azure Blob:
    Fix multi-part uploads. (sandeepkru)
Hubic
    Fix uploads (Nick Craig-Wood)
    Retry auth fetching if it fails to make hubic more reliable (Nick Craig-Wood)

v1.43 - 2018-09-01

New backends
    Jottacloud (Sebastian Bünger)
New commands
    copyurl: copies a URL to a remote (Denis)
New Features
    Reworked config for backends (Nick Craig-Wood)
        All backend config can now be supplied by command line, env var or config file
        Advanced section in the config wizard for the optional items
        A large step towards rclone backends being usable in other go software
        Allow on the fly remotes with :backend: syntax
    Stats revamp
        Add --progress/-P flag to show interactive progress (Nick Craig-Wood)
        Show the total progress of the sync in the stats (Nick Craig-Wood)
        Add --stats-one-line flag for single line stats (Nick Craig-Wood)
    Added weekday schedule into --bwlimit (Mateusz)
    lsjson: Add option to show the original object IDs (Fabian Möller)
    serve webdav: Make Content-Type without reading the file and add --etag-hash (Nick Craig-Wood)
    build
        Build macOS with native compiler (Nick Craig-Wood)
        Update to use go1.11 for the build (Nick Craig-Wood)
    rc
        Added core/stats to return the stats (reddi1)
    version --check: Prints the current release and beta versions (Nick Craig-Wood)
Bug Fixes
    accounting
        Fix time to completion estimates (Nick Craig-Wood)
        Fix moving average speed for file stats (Nick Craig-Wood)
    config: Fix error reading password from piped input (Nick Craig-Wood)
    move: Fix --delete-empty-src-dirs flag to delete all empty dirs on move (ishuah)
Mount
    Implement --daemon-timeout flag for OSXFUSE (Nick Craig-Wood)
    Fix mount --daemon not working with encrypted config (Alex Chen)
    Clip the number of blocks to 2^32-1 on macOS - fixes borg backup (Nick Craig-Wood)
VFS
    Enable vfs-read-chunk-size by default (Fabian Möller)
    Add the vfs/refresh rc command (Fabian Möller)
    Add non recursive mode to vfs/refresh rc command (Fabian Möller)
    Try to seek buffer on read only files (Fabian Möller)
Local
    Fix crash when deprecated --local-no-unicode-normalization is supplied (Nick Craig-Wood)
    Fix mkdir error when trying to copy files to the root of a drive on windows (Nick Craig-Wood)
Cache
    Fix nil pointer deref when using lsjson on cached directory (Nick Craig-Wood)
    Fix nil pointer deref for occasional crash on playback (Nick Craig-Wood)
Crypt
    Fix accounting when checking hashes on upload (Nick Craig-Wood)
Amazon Cloud Drive
    Make very clear in the docs that rclone has no ACD keys (Nick Craig-Wood)
Azure Blob
    Add connection string and SAS URL auth (Nick Craig-Wood)
    List the container to see if it exists (Nick Craig-Wood)
    Port new Azure Blob Storage SDK (sandeepkru)
    Added blob tier, tier between Hot, Cool and Archive. (sandeepkru)
    Remove leading / from paths (Nick Craig-Wood)
B2
    Support Application Keys (Nick Craig-Wood)
    Remove leading / from paths (Nick Craig-Wood)
Box
    Fix upload of > 2GB files on 32 bit platforms (Nick Craig-Wood)
    Make --box-commit-retries flag defaulting to 100 to fix large uploads (Nick Craig-Wood)
Drive
    Add --drive-keep-revision-forever flag (lewapm)
    Handle gdocs when filtering file names in list (Fabian Möller)
    Support using --fast-list for large speedups (Fabian Möller)
FTP
    Fix Put mkParentDir failed: 521 for BunnyCDN (Nick Craig-Wood)
Google Cloud Storage
    Fix index out of range error with --fast-list (Nick Craig-Wood)
Jottacloud
    Fix MD5 error check (Oliver Heyme)
    Handle empty time values (Martin Polden)
    Calculate missing MD5s (Oliver Heyme)
    Docs, fixes and tests for MD5 calculation (Nick Craig-Wood)
    Add optional MimeTyper interface. (Sebastian Bünger)
    Implement optional About interface (for df support). (Sebastian Bünger)
Mega
    Wait for events instead of arbitrary sleeping (Nick Craig-Wood)
    Add --mega-hard-delete flag (Nick Craig-Wood)
    Fix failed logins with upper case chars in email (Nick Craig-Wood)
Onedrive
    Shared folder support (Yoni Jah)
    Implement DirMove (Cnly)
    Fix rmdir sometimes deleting directories with contents (Nick Craig-Wood)
Pcloud
    Delete half uploaded files on upload error (Nick Craig-Wood)
Qingstor
    Remove leading / from paths (Nick Craig-Wood)
S3
    Fix index out of range error with --fast-list (Nick Craig-Wood)
    Add --s3-force-path-style (Nick Craig-Wood)
    Add support for KMS Key ID (bsteiss)
    Remove leading / from paths (Nick Craig-Wood)
Swift
    Add storage_policy (Ruben Vandamme)
    Make it so just storage_url or auth_token can be overridden (Nick Craig-Wood)
    Fix server side copy bug for unusual file names (Nick Craig-Wood)
    Remove leading / from paths (Nick Craig-Wood)
WebDAV
    Ensure we call MKCOL with a URL with a trailing / for QNAP interop (Nick Craig-Wood)
    If root ends with / then don’t check if it is a file (Nick Craig-Wood)
    Don’t accept redirects when reading metadata (Nick Craig-Wood)
    Add bearer token (Macaroon) support for dCache (Nick Craig-Wood)
    Document dCache and Macaroons (Onno Zweers)
    Sharepoint recursion with different depth (Henning)
    Attempt to remove failed uploads (Nick Craig-Wood)
Yandex
    Fix listing/deleting files in the root (Nick Craig-Wood)

v1.42 - 2018-06-16

New backends
    OpenDrive (Oliver Heyme, Jakub Karlicek, ncw)
New commands
    deletefile command (Filip Bartodziej)
New Features
    copy, move: Copy single files directly, don’t use --files-from work-around
        this makes them much more efficient
    Implement --max-transfer flag to quit transferring at a limit
        make exit code 8 for --max-transfer exceeded
    copy: copy empty source directories to destination (Ishuah Kariuki)
    check: Add --one-way flag (Kasper Byrdal Nielsen)
    Add siginfo handler for macOS for ctrl-T stats (kubatasiemski)
    rc
        add core/gc to run a garbage collection on demand
        enable go profiling by default on the --rc port
        return error from remote on failure
    lsf
        Add --absolute flag to add a leading / onto path names
        Add --csv flag for compliant CSV output
        Add ’m’ format specifier to show the MimeType
        Implement ‘i’ format for showing object ID
    lsjson
        Add MimeType to the output
        Add ID field to output to show Object ID
    Add --retries-sleep flag (Benjamin Joseph Dag)
    Oauth tidy up web page and error handling (Henning Surmeier)
Bug Fixes
    Password prompt output with --log-file fixed for unix (Filip Bartodziej)
    Calculate ModifyWindow each time on the fly to fix various problems (Stefan Breunig)
Mount
    Only print “File.rename error” if there actually is an error (Stefan Breunig)
    Delay rename if file has open writers instead of failing outright (Stefan Breunig)
    Ensure atexit gets run on interrupt
    macOS enhancements
        Make --noappledouble --noapplexattr
        Add --volname flag and remove special chars from it
        Make Get/List/Set/Remove xattr return ENOSYS for efficiency
        Make --daemon work for macOS without CGO
VFS
    Add --vfs-read-chunk-size and --vfs-read-chunk-size-limit (Fabian Möller)
    Fix ChangeNotify for new or changed folders (Fabian Möller)
Local
    Fix symlink/junction point directory handling under Windows
        NB you will need to add -L to your command line to copy files with reparse points
Cache
    Add non cached dirs on notifications (Remus Bunduc)
    Allow root to be expired from rc (Remus Bunduc)
    Clean remaining empty folders from temp upload path (Remus Bunduc)
    Cache lists using batch writes (Remus Bunduc)
    Use secure websockets for HTTPS Plex addresses (John Clayton)
    Reconnect plex websocket on failures (Remus Bunduc)
    Fix panic when running without plex configs (Remus Bunduc)
    Fix root folder caching (Remus Bunduc)
Crypt
    Check the crypted hash of files when uploading for extra data security
Dropbox
    Make Dropbox for business folders accessible using an initial / in the path
Google Cloud Storage
    Low level retry all operations if necessary
Google Drive
    Add --drive-acknowledge-abuse to download flagged files
    Add --drive-alternate-export to fix large doc export
    Don’t attempt to choose Team Drives when using rclone config create
    Fix change list polling with team drives
    Fix ChangeNotify for folders (Fabian Möller)
    Fix about (and df on a mount) for team drives
Onedrive
    Errorhandler for onedrive for business requests (Henning Surmeier)
S3
    Adjust upload concurrency with --s3-upload-concurrency (themylogin)
    Fix --s3-chunk-size which was always using the minimum
SFTP
    Add --ssh-path-override flag (Piotr Oleszczyk)
    Fix slow downloads for long latency connections
Webdav
    Add workarounds for biz.mail.ru
    Ignore Reason-Phrase in status line to fix 4shared (Rodrigo)
    Better error message generation

v1.41 - 2018-04-28

New backends
    Mega support added
    Webdav now supports SharePoint cookie authentication (hensur)
New commands
    link: create public link to files and folders (Stefan Breunig)
    about: gets quota info from a remote (a-roussos, ncw)
    hashsum: a generic tool for any hash to produce md5sum like output
New Features
    lsd: Add -R flag and fix and update docs for all ls commands
    ncdu: added a “refresh” key - CTRL-L (Keith Goldfarb)
    serve restic: Add append-only mode (Steve Kriss)
    serve restic: Disallow overwriting files in append-only mode (Alexander Neumann)
    serve restic: Print actual listener address (Matt Holt)
    size: Add --json flag (Matthew Holt)
    sync: implement --ignore-errors (Mateusz Pabian)
    dedupe: Add dedupe largest functionality (Richard Yang)
    fs: Extend SizeSuffix to include TB and PB for rclone about
    fs: add --dump goroutines and --dump openfiles for debugging
    rc: implement core/memstats to print internal memory usage info
    rc: new call rc/pid (Michael P. Dubner)
Compile
    Drop support for go1.6
Release
    Fix make tarball (Chih-Hsuan Yen)
Bug Fixes
    filter: fix --min-age and --max-age together check
    fs: limit MaxIdleConns and MaxIdleConnsPerHost in transport
    lsd,lsf: make sure all times we output are in local time
    rc: fix setting bwlimit to unlimited
    rc: take note of the --rc-addr flag too as per the docs
Mount
    Use About to return the correct disk total/used/free (eg in df)
    Set --attr-timeout default to 1s - fixes:
        rclone using too much memory
        rclone not serving files to samba
        excessive time listing directories
    Fix df -i (upstream fix)
VFS
    Filter files . and .. from directory listing
    Only make the VFS cache if --vfs-cache-mode > Off
Local
    Add --local-no-check-updated to disable updated file checks
    Retry remove on Windows sharing violation error
Cache
    Flush the memory cache after close
    Purge file data on notification
    Always forget parent dir for notifications
    Integrate with Plex websocket
    Add rc cache/stats (seuffert)
    Add info log on notification
Box
    Fix failure reading large directories - parse file/directory size as float
Dropbox
    Fix crypt+obfuscate on dropbox
    Fix repeatedly uploading the same files
FTP
    Work around strange response from box FTP server
    More workarounds for FTP servers to fix mkParentDir error
    Fix no error on listing non-existent directory
Google Cloud Storage
    Add service_account_credentials (Matt Holt)
    Detect bucket presence by listing it - minimises permissions needed
    Ignore zero length directory markers
Google Drive
    Add service_account_credentials (Matt Holt)
    Fix directory move leaving a hardlinked directory behind
    Return proper google errors when Opening files
    When initialized with a filepath, optional features used incorrect root path (Stefan Breunig)
HTTP
    Fix sync for servers which don’t return Content-Length in HEAD
Onedrive
    Add QuickXorHash support for OneDrive for business
    Fix socket leak in multipart session upload
S3
    Look in S3 named profile files for credentials
    Add --s3-disable-checksum to disable checksum uploading (Chris Redekop)
    Hierarchical configuration support (Giri Badanahatti)
    Add in config for all the supported S3 providers
    Add One Zone Infrequent Access storage class (Craig Rachel)
    Add --use-server-modtime support (Peter Baumgartner)
    Add --s3-chunk-size option to control multipart uploads
    Ignore zero length directory markers
SFTP
    Update docs to match code, fix typos and clarify disable_hashcheck prompt (Michael G. Noll)
    Update docs with Synology quirks
    Fail soft with a debug on hash failure
Swift
    Add --use-server-modtime support (Peter Baumgartner)
Webdav
    Support SharePoint cookie authentication (hensur)
    Strip leading and trailing / off root

v1.40 - 2018-03-19

New backends
    Alias backend to create aliases for existing remote names (Fabian Möller)
New commands
    lsf: list for parsing purposes (Jakub Tasiemski)
        by default this is a simple non recursive list of files and directories
        it can be configured to add more info in an easy to parse way
    serve restic: for serving a remote as a Restic REST endpoint
        This enables restic to use any backends that rclone can access
        Thanks Alexander Neumann for help, patches and review
    rc: enable the remote control of a running rclone
        The running rclone must be started with --rc and related flags.
        Currently there is support for bwlimit, and flushing for mount and cache.
New Features
    --max-delete flag to add a delete threshold (Bjørn Erik Pedersen)
    All backends now support RangeOption for ranged Open
        cat: Use RangeOption for limited fetches to make more efficient
        cryptcheck: make reading of nonce more efficient with RangeOption
    serve http/webdav/restic
        support SSL/TLS
        add --user --pass and --htpasswd for authentication
    copy/move: detect file size change during copy/move and abort transfer (ishuah)
    cryptdecode: added option to return encrypted file names. (ishuah)
    lsjson: add --encrypted to show encrypted name (Jakub Tasiemski)
    Add --stats-file-name-length to specify the printed file name length for stats (Will Gunn)
Compile
    Code base was shuffled and factored
        backends moved into a backend directory
        large packages split up
        See the CONTRIBUTING.md doc for info as to what lives where now
    Update to using go1.10 as the default go version
    Implement daily full integration tests
Release
    Include a source tarball and sign it and the binaries
    Sign the git tags as part of the release process
    Add .deb and .rpm packages as part of the build
    Make a beta release for all branches on the main repo (but not pull requests)
Bug Fixes
    config: fixes errors on non existing config by loading config file only on first access
    config: retry saving the config after failure (Mateusz)
    sync: when using --backup-dir don’t delete files if we can’t set their modtime
        this fixes odd behaviour with Dropbox and --backup-dir
    fshttp: fix idle timeouts for HTTP connections
    serve http: fix serving files with : in - fixes
    Fix --exclude-if-present to ignore directories which it doesn’t have permission for (Iakov Davydov)
    Make accounting work properly with crypt and b2
    remove --no-traverse flag because it is obsolete
Mount
    Add --attr-timeout flag to control attribute caching in kernel
        this now defaults to 0 which is correct but less efficient
        see the mount docs for more info
    Add --daemon flag to allow mount to run in the background (ishuah)
    Fix: Return ENOSYS rather than EIO on attempted link
        This fixes FileZilla accessing an rclone mount served over sftp.
    Fix setting modtime twice
    Mount tests now run on CI for Linux (mount & cmount)/Mac/Windows
    Many bugs fixed in the VFS layer - see below
VFS
    Many fixes for --vfs-cache-mode writes and above
        Update cached copy if we know it has changed (fixes stale data)
        Clean path names before using them in the cache
        Disable cache cleaner if --vfs-cache-poll-interval=0
        Fill and clean the cache immediately on startup
    Fix Windows opening every file when it stats the file
    Fix applying modtime for an open Write Handle
    Fix creation of files when truncating
    Write 0 bytes when flushing unwritten handles to avoid race conditions in FUSE
    Downgrade “poll-interval is not supported” message to Info
    Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC
Local
    Downgrade “invalid cross-device link: trying copy” to debug
    Make DirMove return fs.ErrorCantDirMove to allow fallback to Copy for cross device
    Fix race conditions updating the hashes
Cache
    Add support for polling - cache will update when remote changes on supported backends
    Reduce log level for Plex api
    Fix dir cache issue
    Implement --cache-db-wait-time flag
    Improve efficiency with RangeOption and RangeSeek
    Fix dirmove with temp fs enabled
    Notify vfs when using temp fs
    Offline uploading
    Remote control support for path flushing
Amazon cloud drive
    Rclone no longer has any working keys - disable integration tests
    Implement DirChangeNotify to notify cache/vfs/mount of changes
Azureblob
    Don’t check for bucket/container presence if listing was OK
        this makes rclone do one less request per invocation
    Improve accounting for chunked uploads
Backblaze B2
    Don’t check for bucket/container presence if listing was OK
        this makes rclone do one less request per invocation
Box
    Improve accounting for chunked uploads
Dropbox
    Fix custom oauth client parameters
Google Cloud Storage
    Don’t check for bucket/container presence if listing was OK
        this makes rclone do one less request per invocation
Google Drive
    Migrate to api v3 (Fabian Möller)
    Add scope configuration and root folder selection
    Add --drive-impersonate for service accounts
        thanks to everyone who tested, explored and contributed docs
    Add --drive-use-created-date to use created date as modified date (nbuchanan)
    Request the export formats only when required
        This makes rclone quicker when there are no google docs
    Fix finding paths with latin1 chars (a workaround for a drive bug)
    Fix copying of a single Google doc file
    Fix --drive-auth-owner-only to look in all directories
HTTP
    Fix handling of directories with & in
Onedrive
    Removed upload cutoff and always do session uploads
        this stops the creation of multiple versions on business onedrive
    Overwrite object size value with real size when reading file. (Victor)
        this fixes oddities when onedrive misreports the size of images
Pcloud
    Remove unused chunked upload flag and code
Qingstor
    Don’t check for bucket/container presence if listing was OK
        this makes rclone do one less request per invocation
S3
    Support hashes for multipart files (Chris Redekop)
    Initial support for IBM COS (S3) (Giri Badanahatti)
    Update docs to discourage use of v2 auth with CEPH and others
    Don’t check for bucket/container presence if listing was OK
        this makes rclone do one less request per invocation
    Fix server side copy and set modtime on files with + in
SFTP
    Add option to disable remote hash check command execution (Jon Fautley)
    Add --sftp-ask-password flag to prompt for password when needed (Leo R. Lundgren)
    Add set_modtime configuration option
    Fix following of symlinks
    Fix reading config file outside of Fs setup
    Fix reading $USER in username fallback not $HOME
    Fix running under crontab - Use correct OS way of reading username
Swift
    Fix refresh of authentication token
        in v1.39 a bug was introduced which ignored new tokens - this fixes it
    Fix extra HEAD transaction when uploading a new file
    Don’t check for bucket/container presence if listing was OK
        this makes rclone do one less request per invocation
Webdav
    Add new time formats to support mydrive.ch and others

Enhancement: Automatic re-mount of rclone and Fake Cache Option

Hi Mads,

The Docker container loses the mount of the cloud folder from time to time. Could you add a routine that checks whether the cloud is still mounted and, if not, remounts it automatically? In addition to that, a docker exec command would also be beneficial: "docker exec cms remount"

When you lose the mount, all services connected to it lose their content database. Plex/Sonarr, for example, need to rescan the entire library to check that all files are available again before they work properly. Wouldn't it be possible to generate a fake cache structure?

Cheers
C
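
In the meantime I run a crude watchdog from the host's cron as a stopgap (only a sketch, and since the remount command doesn't exist yet it simply restarts the container when the mount looks dead; container name as used elsewhere in this thread):

#!/usr/bin/env bash
# check-cloud-mount.sh - restart the container if the cloud mount is gone
if ! docker exec cloud-media-scripts ls /cloud-decrypt > /dev/null 2>&1; then
    docker restart cloud-media-scripts
fi

Scheduled on the host with something like: */10 * * * * /path/to/check-cloud-mount.sh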

Plexdrive version outdated

Hey,

Your plexdrive version is outdated. As of the first of August, the newest version is 5.0.0, and that version replaced MongoDB with BoltDB (a single file).

With BoltDB the cache file is now attached like this: "--cache-file /path/to/cache-file".

I've made a fork with all this and made it work with the new plexdrive version. You can look at it if you want to; keep in mind though that my fork just "works", it's not bug free I think.
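
The mount call in my fork ends up looking roughly like this (only --cache-file is new compared to 4.x; other flags omitted, so treat it as a sketch rather than the exact command):

plexdrive mount --cache-file /chunks/cache.bolt /cloud-encrypt &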

Issues with rclone options on cloud upload

I just recently built this out and was having issues with the upload portion. I investigated the command used to upload and noticed that many of the options were drawn from the variables file. A lot of these options were not making it into the rclone copy command, and the upload would fail as a result of them missing (without indicating as much).

For reference, the upload with the configurable options: https://github.com/madslundt/docker-cloud-media-scripts/blob/master/scripts/cloudupload.script#L43

And the variables reference: https://github.com/madslundt/docker-cloud-media-scripts/blob/master/scripts/variables#L10

It was enough for me to duplicate rclone_config into a separate upload variable (duplicated parameters, admittedly, but it allows for future flexibility in the options).
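
Roughly what that duplication looks like on my end (variable names here are illustrative, not the repo's exact ones):

rclone_config="--config=/config/rclone.conf"
# mount-specific flags stay in the existing options variable; the copy gets its own set
rclone_upload_options="${rclone_config} --bwlimit 8M --checkers 16"
# cloudupload then runs something like:
# rclone copy ${rclone_upload_options} /local-decrypt "${rclone_cloud_endpoint}"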

Plans for Plexdrive 5.x?

I am curious if there are plans for updating this to Plexdrive v5? I've had quite noticeably improved performance and lower memory consumption on the 5.x release.

If you don't have plans, would there be a willingness to merge my PR if I did the work?

Enhancement: Script change - Add verbosity options for logging

I have a request to add a verbose option to the docker containers. I'd like to have the ability to see more output for troubleshooting etc. I think this could also apply to rmlocal since that involves the move?

My idea is:

  • Add an env variable, e.g. log_verbose or something along those lines.
  • Make the current behavior the default.
  • Add a -v=x option to $rclone_options based on the env variable above (see the sketch after this list).
  • Example update, not positive if coded properly. master...Dulanic:master
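
Sketch of the idea (the env variable name is only an example):

if [ "${LOG_VERBOSE:-0}" -gt 0 ]; then
    rclone_options="${rclone_options} -v"
fi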

Missing colon (:) in RCLONE_CLOUD_ENDPOINT ("gd-crypt:") error with docker-compose

When RCLONE_CLOUD_ENDPOINT is defined in docker-compose.yml, the cloudupload and check scripts give the following error:

Missing colon (:) in RCLONE_CLOUD_ENDPOINT ("gd-crypt:")
Run: docker exec -ti <DOCKER_CONTAINER> rclone_setup

When RCLONE_CLOUD_ENDPOINT is not defined (and the default name gd-crypt is used when setting up rclone) it works without problems.

I'm using cloud-media-scripts without encryption, but I assume this also happens when using encryption.

I also saw this on another issue #19

Here are my configs:

docker-compose.yml

cloud-media-scripts:
  image: madslundt/cloud-media-scripts
  restart: always
  container_name: cloud-media-scripts
  volumes:
    - /opt/gdrive:/local-media:shared
    - /opt/appdata/cloud-media-scripts/external/media:/local-decrypt:shared
    - /opt/appdata/cloud-media-scripts/config:/config
    - /opt/appdata/cloud-media-scripts/external/plexdrive:/chunks
    - /opt/appdata/cloud-media-scripts/data/db:/data/db
    - /opt/appdata/cloud-media-scripts/logs:/log
    - /opt/appdata/cloud-media-scripts/external/cloud-encrypt:/cloud-encrypt:shared
    - /opt/appdata/cloud-media-scripts/external/cloud-decrypt:/cloud-decrypt:shared
  environment:
    - ENCRYPT_MEDIA=0
    - RCLONE_CLOUD_ENDPOINT="gd-crypt:"
    - PGID=998
    - PUID=999
  privileged: true
  devices:
    - /dev/fuse
  cap_add:
    - mknod
    - sys_admin
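
One thing worth checking: with list-style environment entries, compose appears to keep the quotes as part of the value, so the scripts may literally be seeing "gd-crypt:" (with quotes) and failing the trailing-colon check. Dropping the quotes might be enough:

  environment:
    - ENCRYPT_MEDIA=0
    - RCLONE_CLOUD_ENDPOINT=gd-crypt: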

rclone.conf

[gd-crypt]
type = drive
client_id =
client_secret =
token = {"access_token":"snip"

Versions:

Operating System: Debian GNU/Linux 9 (stretch)
Kernel: Linux 4.9.0-4-amd64
Docker version 17.12.0-ce, build c97c6d6
docker-compose version 1.18.0, build 8dd22a9
latest image from docker hub

[Enhancement] Folder for plexdrive on non encrypted mount

Please modify the plexdrive script to allow / fix mounting the folder specified in RCLONE_CLOUD_ENDPOINT when encryption is disabled.
Currently:
plexdrive $mongo $plexdrive_options "${cloud_dir}" &
Fixed:
We need to determine the folder id, or use another environment variable like PLEXDRIVE_CLOUD_ENDPOINT:
plexdrive $mongo $plexdrive_options "${cloud_dir}" --root-node-id $folder_id_to_mount &

Stuck on "Mounting union: /local-media"

Hi,
After configuring Rclone & Plexdrive I'm stuck on "Mounting union: /local-media":

docker logs -f :

"Waiting for mount /cloud-encrypt ...
[ 2019-08-29@07:32:46 ] Mounting decrypted Google Drive: /cloud-decrypt
2019/08/29 07:32:46 NOTICE: Encrypted drive 'local-crypt:': poll-interval is not supported by this remote
[ 2019-08-29@07:32:51 ] Mounting union: /local-media"

Rclone conf :

[gd]
type = drive
client_id =
client_secret =
service_account_file =
token = <token>
team_drive = <team_drive>

[gd-crypt]
type = crypt
remote = gd:
filename_encryption = standard
directory_name_encryption = true
password = <pass>
password2 = <pass2>

[local-crypt]
type = crypt
remote = /cloud-encrypt/
filename_encryption = standard
directory_name_encryption = true
password = <pass>
password2 = <pass2>

In docker container:

root@ca975ebd9882:~# ll /cloud-decrypt/
total 0
root@ca975ebd9882:~# rclone --config "/config/rclone.conf" lsd gd-crypt:
          -1 2019-08-29 22:00:04        -1 movies

Any solution?
Thanks

Cloud mount goes away

I have been using this container for nearly 3 years now and it has been great. However, I have noticed that a few times in the last month I have seemingly lost the mount to Google Drive and the container needs to be restarted. The log (docker logs -f cloud-media-scripts) doesn't show anything. Is there any way to enable debug logging, or another place to look for info? Thanks!
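
A couple of low-tech checks that might narrow down which layer is dropping (paths as used elsewhere in this thread):

docker exec -ti cloud-media-scripts bash
ls -la /log                                        # any log files the scripts have written
ls /cloud-encrypt /cloud-decrypt /local-media      # which mount in the chain errors first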

Question about default configuration

First of all thank you for your work.

I noticed that with the default configuration, I could not start a movie playing from its midpoint (seeking halfway in).

What kind of settings should I choose (chunks, buffer, ...) if I am looking to play large files (~25GB)?

Hardware spec:
Processor: Intel Xeon E3-1225v2 (4c/4t)
Frequency: 3.2GHz / 3.6GHz
RAM: 16GB DDR3 1333MHz
Disks: 2x 2TB SATA
Bandwidth: 250 Mbps
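
Not this container's defaults, but when experimenting with an rclone mount directly these are the usual knobs for large files (values are only examples to start from):

rclone mount gd-crypt: /cloud-decrypt \
    --config /config/rclone.conf \
    --allow-other \
    --buffer-size 256M \
    --vfs-read-chunk-size 64M \
    --vfs-read-chunk-size-limit 2G &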

Issue mounting local-crypt

Hi,

I think there is an issue with the latest rclone:

[ 2018-01-31@00:59:28 ] Mounting Google Drive mountpoint: /cloud-encrypt
[ 2018-01-31@00:59:33 ] Mounting decrypted Google Drive: /cloud-decrypt
2018/01/31 00:59:33 NOTICE: Encrypted drive 'local-crypt:': poll-interval is not supported by this remote
[ 2018-01-31@00:59:38 ] Mounting union: /local-media

I connected to the container with docker exec -ti <DOCKER_CONTAINER> bash
Then I tried the following command:
rclone --config /config/rclone.conf lsd local-crypt:
[screenshot]
So far, this is the expected behavior.

But when I run ll cloud-decrypt/, it results in:
[screenshot]

Could this poll-interval option be preventing the mount?
How can I debug further?

Regards,
Bounty1342

Data loading to /local-media on hard drive?

So I've been running this for 5 days and suddenly I noticed my sda drive is full. I couldn't figure it out until I stopped the Docker container and noticed there was still 135GB in local-media. I ran an rclone move and it removed them all because they already existed in the cloud. Seems odd that this happened?

Now that I've looked at some of this, most of these look like items I thought I had deleted? Is it maybe downloading them back down to the local dir if I delete them from local-media?

me@me:/local-media$ sudo du -d 2 -h
48M     ./Media/Books
2.2G    ./Media/Shows
104G    ./Media/Movies
106G    ./Media
106G    .

Fuse folder /local-media can see files in container but not in host mapped to /media

My FUSE union of two folders in the container is working just fine, but when I browse the host I am not seeing the fused files in the folder I have mapped.
In the container, /local-decrypt and /cloud-decrypt are fused together into /local-media.
/local-media in the container is mapped to /media on the host.
From within the container I can see files in /local-media, but not from /media on the host.
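
For reference, the other setups in this thread map /local-media with shared mount propagation so the FUSE union becomes visible outside the container, i.e. something like this in the create command (note the :shared suffix):

-v /media:/local-media:shared \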

Stuck on "Waiting for /cloud-encrypt"

I've followed the instructions in the readme but have not been able to complete the process. When I get to the point of running docker exec -ti <container> plexdrive_setup it hangs at:

Paste the authorization code: 

This is after completing the OAuth flow successfully. I am using the same id and secret that I used for rclone.

Output of check command:

root@scw-7f0a37:~# docker exec -ti 64df37e189b0b392ac4645a1fb8c27ef882a088248b87fba4a8b12f040467932  check
Plexdrive is not running

This command:

root@scw-7f0a37:~# docker exec -ti 64df37e189b0b392ac4645a1fb8c27ef882a088248b87fba4a8b12f040467932  cat /config/token.json

outputs a JSON file with an access and refresh token.


I ran through the commands verbatim, changing nothing

History file:

  8  ./create_container.sh
   ...
   17  docker start 64df37e189b0
  ...
    2  docker exec -ti 64df37e189b0b392ac4645a1fb8c27ef882a088248b87fba4a8b12f040467932  rclone_setup
    3  docker exec -ti 64df37e189b0b392ac4645a1fb8c27ef882a088248b87fba4a8b12f040467932  plexdrive_setup
    4  docker exec -ti 64df37e189b0b392ac4645a1fb8c27ef882a088248b87fba4a8b12f040467932  check
    5  docker exec -ti 64df37e189b0b392ac4645a1fb8c27ef882a088248b87fba4a8b12f040467932  plexdrive_setup
    6  docker exec -ti 64df37e189b0b392ac4645a1fb8c27ef882a088248b87fba4a8b12f040467932  cat /config/token.json
#create_container.sh
root@scw-7f0a37:~# cat create_container.sh
#!/usr/bin/env bash

docker create \
--name cloud-media-scripts \
-v /media:/local-media:shared \
-v /mnt/external/media:/local-decrypt:shared \
-v /configurations:/config \
-v /mnt/external/plexdrive:/chunks \
-v /logs:/log \
--privileged --cap-add=MKNOD --cap-add=SYS_ADMIN --device=/dev/fuse \
madslundt/cloud-media-scripts

What am I missing here? Does the provided example for creating the container match the setup commands?

Issue removing file on media folder

Hi there,

Removing a file with the 'rm' command doesn't seem to work.
If the container restarts, the removed file shows up again.

Also, from time to time it will download to the local folder a file that is already in the cloud (Gdrive). Any idea why?

Regards,
Bounty1342
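
If I understand the union setup correctly (a local layer plus a read-only cloud layer), an rm only affects the local copy, so anything still present in the cloud reappears after a remount. Deleting it on the cloud side as well is probably needed, e.g. something like:

rclone --config /config/rclone.conf delete "gd-crypt:path/to/file"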

ls: cannot access '/cloud-encrypt': Transport endpoint is not connected

Hey there.

I have tried for several days to follow your tutorial to make this awesome Docker container work, but I keep getting this error.

Plexdrive is not running
ls: cannot access '/cloud-encrypt': Transport endpoint is not connected
Waiting for /cloud-encrypt

My docker create command looks like this.
docker create \
--name cloud-media-scripts \
-v /volume1/docker/Cloud-Media/media:/local-media:shared \
-v /volume1/docker/Cloud-Media/mnt/external/media:/local-decrypt:shared \
-v /volume1/docker/Cloud-Media/configurations:/config \
-v /volume1/docker/Cloud-Media/mnt/external/plexdrive:/chunks \
-v /volume1/docker/Cloud-Media/logs:/log \
-v /volume1/docker/Cloud-Media/cloud-encrypt:/cloud-encrypt:shared \
-e PGID="1000" \
-e PUID="1000" \
-e CLEAR_CHUNK_MAX_SIZE="200G" \
--privileged --cap-add=MKNOD --cap-add=SYS_ADMIN --device=/dev/fuse \
madslundt/cloud-media-scripts
I have tried both with and without the PGID & PUID but with the same result.

Any good ideas on what I am doing wrong?

THX in Advance

EDIT: BTW, I am running this container on a Synology 916+ with 8GB.
But I have tried on an Ubuntu VM with the same problem.

Everything seems okay but folders stay empty.

I set up everything according to the docs on my Synology using Docker.
When I run the check command it returns "Everything looks good".
The only thing I can get to work is an encrypted Google Drive upload, when I put files in the local-decrypt folder and manually start the upload. The other folders stay completely empty. Also, no logs appear in my /logs folder.

Here's my docker command:

docker create \
--name cloud-media-scripts \
-v /docker/plexcloud/logs:/logs \
-v /docker/plexcloud/data/db:/data/db \
-v /docker/plexcloud/config:/config \
-v /HDD1/PlexCloud/local-media:/local-media:shared \
-v /HDD1/PlexCloud/local-decrypt:/local-decrypt:shared \
-v /HDD1/PlexCloud/cloud-encrypt:/cloud-encrypt \
-v /HDD1/PlexCloud/cloud-decrypt:/cloud-decrypt:shared \
-v /HDD1/PlexCloud/chunks:/chunks \
--privileged --cap-add=MKNOD --cap-add=SYS_ADMIN --device=/dev/fuse \
madslundt/cloud-media-scripts

So these three folders stay empty:

- /HDD1/PlexCloud/local-media
- /HDD1/PlexCloud/cloud-encrypt
- /HDD1/PlexCloud/cloud-decrypt

Any idea what's going wrong here?
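
Two things worth checking (hedged, since I can only guess at the setup): the cloud mounts are created inside the container and only show up on the host when the corresponding host directory is bind-mounted with shared propagation (the cloud-encrypt volume above has no :shared flag), and the logs volume here is mapped to /logs while the other examples in this thread map it to /log, which would explain the empty logs folder. Comparing the view inside and outside the container usually shows whether the mounts themselves work:

# Inside the container: these should list the cloud content and the union
docker exec cloud-media-scripts ls /cloud-decrypt
docker exec cloud-media-scripts ls /local-media

# On the host: empty here but populated above points at mount propagation
ls /HDD1/PlexCloud/cloud-decrypt
ls /HDD1/PlexCloud/local-media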

ls: cannot access '/cloud-encrypt': Transport endpoint is not connected

I initially entered the following Docker commands to download the image and create volumes for cloud-media-scripts:

sudo docker pull madslundt/cloud-media-scripts
sudo docker volume create --name local-media
sudo docker volume create --name local-decrypt
sudo docker volume create --name config
sudo docker volume create --name chunks
sudo docker volume create --name datadb
sudo docker volume create --name log
sudo docker volume create --name cloud-encrypt
sudo docker volume create --name cloud-decrypt

Then ran:

sudo docker create --name cloud-media-scripts \
--restart=always \
--net=lsio \
--privileged --cap-add=MKNOD --cap-add=SYS_ADMIN --device=/dev/fuse \
-e PUID=0 \
-e PGID=0 \
-e CLEAR_CHUNK_MAX_SIZE="1000G" \
-e REMOVE_LOCAL_FILES_WHEN_SPACE_EXCEEDS_GB="2000" \
-e FREEUP_ATLEAST_GB="1000" \
-v "/share/media":/local-media:shared \
-v /mnt/external/media:/local-decrypt:shared \
-v /mnt/external/media/cloud-encrypt:/cloud-encrypt:shared \
-v "/share/appdata/cms":/config \
-v /mnt/external/plexdrive:/chunks \
-v /logs:/log \
madslundt/cloud-media-scripts

I then successfully ran

sudo docker exec -ti cloud-media-scripts rclone_setup

following instructions here:

https://github.com/madslundt/docker-cloud-media-scripts

and then successfully ran

sudo docker exec -ti cloud-media-scripts plexdrive_setup.

Now when I start the Docker container, I am seeing:

User gid: 0

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Started mongod --logpath /log/mongod.log
Started mount
2019-10-23T16:15:39.390+0000 I CONTROL [main] log file "/log/mongod.log" exists; moved to "/log/mongod.log.2019-10-23T16-15-39".
[ 2019-10-23@16:15:45 ] Mounting Google Drive mountpoint: /cloud-encrypt
ls: cannot access '/cloud-encrypt': Transport endpoint is not connected
Waiting for mount /cloud-encrypt ...

I tried via SSH to execute

fusermount -uz /mnt/external/media/cloud-encrypt

which executed successfully. I then stopped and restarted the container, and I still get the error message

ls: cannot access '/cloud-encrypt': Transport endpoint is not connected

Can you tell me how to fix this error?

What is the best way to contact you for your assistance? Discord? If so, what ID?
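
For what it's worth, this error usually means a previous plexdrive mount died and left a stale FUSE mount behind, which then propagates through the :shared bind mounts. A hedged cleanup sequence (paths taken from the create command above) is to stop the container, lazily unmount anything stale on the host, verify nothing is left, and start again:

docker stop cloud-media-scripts

# Clear stale FUSE mounts on the host (umount -l as a fallback if fusermount refuses)
fusermount -uz /mnt/external/media/cloud-encrypt || umount -l /mnt/external/media/cloud-encrypt
fusermount -uz /share/media || umount -l /share/media

# Should print nothing before starting the container again
mount | grep -E 'cloud-encrypt|/share/media'

docker start cloud-media-scripts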

Howto docker-cloud-media-scripts on Admiral

I thought I would contribute some instructions that help make this an easier build. Feel free to change and edit this at will.

Photon Admiral How-to with docker-cloud-media-scripts Bonus (Extreme Rough Draft)

First of all, I would like to thank the authors of these sites for a good portion of my information:

https://blogs.vmware.com/cloudnative/2016/10/03/getting-started-vmware-admiral-container-service-photon-os/

http://cormachogan.com/2016/04/07/getting-started-photon-os-vsphere-integrated-containers/

http://www.vmtocloud.com/how-to-configure-photon-os-to-auto-start-containers-at-boot-time/

http://www.vmtocloud.com/how-to-enable-docker-remote-api-on-photon-os/

https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/#cap_add-cap_drop

Note about the infrastructure

I built this using VMware.

We will use one VM for Admiral and two VMs configured with more resources.

I used the stock OVA hardware configuration for Admiral.
I beefed up the CPU, memory, and storage for the two hosts.

Setup of the Photon OS VMs

Download Photon OS

Download Link - https://vmware.github.io/photon/

I downloaded the OVA and installed it in VMware.

I used the console to log in and change the password, then ran ifconfig to get the IP, and then SSHed in.

Configure a Static IP

cd /etc/systemd/network
mv 10-dhcp-en.network 10-static-en.network
vi 10-static-en.network

This is an example of my 10-static-en.network file

[Match]
Name=e*

[Network]
DHCP=no
Address=192.168.1.5/24
Gateway=192.168.1.1
DNS=8.8.8.8 8.8.8.8
Domains=contoso.com
NTP=time-a.nist.gov
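
After saving the file, restart the network service so the static configuration takes effect (a hedged note: on the Photon OS builds I used, networking is handled by systemd-networkd):

systemctl restart systemd-networkd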


Configure a hostname

cd /etc
vi hostname

Admiral VM Configuration

Start and Enable Docker on Boot

systemctl start docker
systemctl enable docker

Build and run Admiral

docker run -d -p 8282:8282 --name admiral vmware/admiral

Make the Admiral container start on boot

vi /etc/systemd/system/docker-admiral.service

My docker-admiral.service file

[Unit]
Description=Admiral container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a admiral
ExecStop=/usr/bin/docker stop -t 2 admiral

[Install]
WantedBy=default.target


Enable service at boot time

systemctl enable docker-admiral.service
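
If the unit file was just created, a daemon-reload may be needed before enabling and starting it (hedged; this is standard systemd behaviour rather than anything Admiral-specific):

systemctl daemon-reload
systemctl enable docker-admiral.service
systemctl start docker-admiral.service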

Build Host

Follow the static IP and hostname sections again for each host.

Enable the remote API

vi /etc/default/docker

My /etc/default/docker file:

DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"

Make API port persistent

vi /etc/systemd/scripts/iptables

Go to the end of the file, before # End, and add these two lines:

#Enable Docker API
iptables -A INPUT -p tcp --dport 2375 -j ACCEPT

systemctl start docker
systemctl enable docker
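
A quick sanity check that the remote API is reachable before adding the host in Admiral (hedged; replace the IP with the host's static address):

curl http://192.168.1.5:2375/version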

Browse to Admiral Host IP:8282

Follow the guide for adding hosts from this site:
https://blogs.vmware.com/cloudnative/2016/10/03/getting-started-vmware-admiral-container-service-photon-os/

Install madslundt/docker-cloud-media-scripts

It goes without saying that you should read this page:
https://github.com/madslundt/docker-cloud-media-scripts
It is super easy with Admiral.

This information was gathered with big help from

davidjameshowell from the docker-cloud-media-scripts project
Everyone in https://gitter.im/project-admiral/Lobby
Especially Stanislav Hadjiiski

Only certain options were exposed to the UI for various reasons
You can build everything with a Blueprint YAML

Build your Blueprint YAML

My YAML file (if the formatting is not preserved it will not work; I am attaching the YAML file to this as well).
(Make sure to include the 3 dashes at the top of the YAML script)
NOTE: it has the following options included [--privileged --cap-add=MKNOD --cap-add=SYS_ADMIN --device=/dev/fuse]

docker-cloud-media-scripts.yaml.TXT (Don't forget to change the extension to YAML)



name: "cloud-media-scripts"
components:
cloud-media-scripts:
type: "App.Container"
data:
name: "cloud-media-scripts"
image: "registry.hub.docker.com/madslundt/cloud-media-scripts"
_cluster: 1
privileged: true
cap_add:
- MKNOD
- SYS_ADMIN
device:
- "/dev/fuse:/dev/fuse"
env:
- var: "CLEAR_CHUNK_MAX_SIZE"
value: ""100""
- var: "REMOVE_LOCAL_FILES_WHEN_SPACE_EXCEEDS_GB"
value: ""350""
- var: "FREEUP_ATLEAST_GB"
value: ""100""
volumes:
- "/media:/local-media:shared"
- "/mnt/external/media:/local-decrypt:shared"
- "/configurations:/config"
- "/mnt/external/plexdrive:/chunks"
- "/logs:/log"
publish_all: true
restart_policy: "no"

Browse to the Admiral IP:8282.
Go to Templates, then click on Templates.
Click Import template or Docker Compose.

Load your YAML file then follow the rest of the setup for Rclone and Plexdrive and everything else on this page:
https://github.com/madslundt/docker-cloud-media-scripts

You now have madslundt/docker-cloud-media-scripts running in Vmware Admiral!

Again Big Shout out to -

davidjameshowell from the docker-cloud-media-scripts project
Everyone in https://gitter.im/project-admiral/Lobby
Especially Stanislav Hadjiiski (seriously, I couldn't have done this without you)

Not uploading

I passed through the variable:
-e REMOVE_LOCAL_FILES_WHEN_SPACE_EXCEEDS_GB="200" \
but I'm sitting at 400+GB and no uploading has taken place yet.

When I run "docker logs cloud-media-scripts -f", I see this:
[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-adduser: executing...

GID/UID

User uid: 1000
User gid: 1000

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Executing mongod --logpath /log/mongod.log
Executing mount
2017-09-08T18:27:56.858+0000 I CONTROL [main] log file "/log/mongod.log" exists; moved to "/log/mongod.log.2017-09-08T18-27-56".
[ 2017-09-08@18:28:01 ] Mounting Google Drive mountpoint: /cloud-decrypt
[ 2017-09-08@18:28:06 ] Mounting union: /local-media
Seems hung at "Mounting union" but the union appears to work fine.

Any ideas? I'm running the currently posted commit, but I've been removing and re-adding your commits for the past week or so. Thanks!
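
One note, hedged since I can only go by the variable names: REMOVE_LOCAL_FILES_WHEN_SPACE_EXCEEDS_GB controls when local copies are cleaned up, not when uploads run; uploads are handled by the cloudupload script mentioned elsewhere in this thread. If nothing is moving, it may help to trigger it by hand and watch its log inside the container (the log file name is an assumption based on the other issues here):

docker exec -ti cloud-media-scripts cloudupload
docker exec -ti cloud-media-scripts tail -f /log/cloudupload.log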

All local content removed

I've no idea what happened but all my local content was removed. Everything was already uploaded to the cloud but I should still have around 5 TB of content stored locally.

Is there any log available?
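
Hedged pointer (container name and grep pattern are assumptions): the scripts log to whatever host directory is mapped to /log, and the local-cleanup script is the most likely culprit if REMOVE_LOCAL_FILES_WHEN_SPACE_EXCEEDS_GB / FREEUP_ATLEAST_GB are set, so that is the first place to look:

docker exec cloud-media-scripts ls /log
docker exec cloud-media-scripts grep -ril "remov" /log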

Cloud-decrypt : Input/output error

Hi,

When I do an ls on the cloud-decrypt mount, it gives an error message:
ll cloud-decrypt

The strange thing is that when doing an rclone lsd on the drive, the folders are there:
rclone lsd gd-crypt

Any idea how to get more information to help debug it?
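
Running the same listing with rclone's verbose flag, against both the crypt remote and the underlying remote, usually shows whether this is a decryption problem (wrong password/salt) or an API/mount error (hedged: gd-crypt: is the name used above, while gd: as the underlying remote name is an assumption):

rclone lsd gd-crypt: -vv
rclone lsd gd: -vv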

Slow upload due to existing files

The cloudupload script takes a lot of time to execute even if there are no new files to upload. This is mainly due to the rclone copy call on each file. If a file already exists on the remote target, it still takes 3 to 10 seconds (per file!) to check it. With my (not so large) library of 1200 files, it already takes more than 3 hours simply to list files and check whether they already exist.

Wouldn't it make sense to first compare the files in /cloud-decrypt and /local-decrypt and only upload those that exist only locally? I guess comparing file name and size should be enough to determine whether a file is different. What do you think?
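
A rough illustration of the idea (a sketch only, assuming /local-decrypt and /cloud-decrypt are both mounted where rclone is configured, and that the remote is called gd-crypt: as elsewhere in this thread): list the relative paths on both sides, keep the ones that exist only locally, and copy just those.

# Files that exist locally but not in the mounted cloud view
comm -23 <(cd /local-decrypt && find . -type f | sort) \
         <(cd /cloud-decrypt && find . -type f | sort) |
while IFS= read -r f; do
    rel="${f#./}"
    rclone copy "/local-decrypt/$rel" "gd-crypt:$(dirname "$rel")"
done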

So ideas

So what you have looks promising (I added a star). I want to try to fit that in. I dockerized the entire Plex server, including the supporting programs, except the rclone portion. I'm running into the rclone ban issue. If you have a second, take a look and let me know what you think:

https://github.com/Admin9705/The-Awesome-Plex-Server

I'll look into your Docker image and see how I can squeeze it in. The server setup works great: running with 147 TB on a dual Xeon and zero issues with constant downloads.

I added your link to the page as well.

GDrive Bans?

So I know this isn't per se an issue with this Docker image, or maybe it is, but something is up and I don't know how to research it further... within under an hour of use I am getting banned every single day. My total upload to my Google Drive today was 27 GB, well under the 750 GB limit. And you can see I barely used my API? It worked this morning when I started one movie on Plex as a test.

traffic (1).xlsx
errors.xlsx

Why would they keep blocking this?

2017/10/06 13:27:53 ERROR : Media/Movies/**Redacted**/**Redacted** HDTV-1080p.mkv: ReadFileHandle.Read error: low level retry 8/10: bad response: 403: 403 Forbidden
2017/10/06 13:27:53 ERROR : Media/Movies/**Redacted**/**Redacted** HDTV-1080p.mkv: ReadFileHandle.Read error: low level retry 9/10: bad response: 403: 403 Forbidden
2017/10/06 13:27:53 ERROR : Media/Movies/**Redacted**/**Redacted** HDTV-1080p.mkv: ReadFileHandle.Read error: low level retry 10/10: bad response: 403: 403 Forbidden
2017/10/06 13:27:54 ERROR : Media/Movies/**Redacted**/**Redacted** HDTV-1080p.mkv: ReadFileHandle.Read error: bad response: 403: 403 Forbidden

Started mount loop

I have a continuous loop like this:

Started mount
mkdir: cannot create directory '/cloud-decrypt': File exists
[ 2019-04-04@14:19:53 ] Mounting Google Drive mountpoint: /cloud-decrypt
[ 2019-04-04@14:19:58 ] Union mountpoint: /local-media already mounted.
Started mount
mkdir: cannot create directory '/cloud-decrypt': File exists
[ 2019-04-04@14:23:02 ] Mounting Google Drive mountpoint: /cloud-decrypt
[ 2019-04-04@14:23:07 ] Union mountpoint: /local-media already mounted.
Started mount
mkdir: cannot create directory '/cloud-decrypt': File exists
[ 2019-04-04@14:23:24 ] Mounting Google Drive mountpoint: /cloud-decrypt
[ 2019-04-04@14:23:29 ] Union mountpoint: /local-media already mounted.
Started mount
mkdir: cannot create directory '/cloud-decrypt': File exists
[ 2019-04-04@14:23:40 ] Mounting Google Drive mountpoint: /cloud-decrypt
[ 2019-04-04@14:23:45 ] Union mountpoint: /local-media already mounted.
Started mount
mkdir: cannot create directory '/cloud-decrypt': File exists
[ 2019-04-04@14:24:02 ] Mounting Google Drive mountpoint: /cloud-decrypt
[ 2019-04-04@14:24:07 ] Union mountpoint: /local-media already mounted.

I have configured everything the same as before this problem; now I can't see the mount points on the local machine or in the container.

Regards.

Other containers lose access to /local-media upon remounts?

I have noticed a recent issue with Plex and some other containers. When this container remounts the locations, the other containers can no longer access them, or at least Plex can't. Is there something I can do to correct this issue?

Immediately after this container remounts, this is the error I see from within the Plex container:

root@mediaserver:/# cd data
bash: cd: data: Transport endpoint is not connected

container commands:

sudo docker create --name cloud --restart=always -v /local-media:/local-media:shared -v /local-decrypt:/local-decrypt:shared -v /docker/containers/cloud/config:/config -v /mnt/external/plexdrive:/chunks -v /var/log:/log -e CLEAR_CHUNK_MAX_SIZE="2000G" -e ENCRYPT_MEDIA="1" -e PGID=1000 -e PUID=1000 -e RCLONE_LOCAL_ENDPOINT="gd-crypt:" -e REMOVE_LOCAL_FILES_WHEN_SPACE_EXCEEDS_GB="1500" -e FREEUP_ATLEAST_GB="1000" -e TZ="America/Chicago" --privileged --cap-add=MKNOD --cap-add=SYS_ADMIN --device=/dev/fuse madslundt/cloud-media-scripts

sudo docker run -d --name plex --restart=always --network=host -e TZ="America/Chicago" -e PLEX_CLAIM="redacted" -v /docker/containers/plex/config:/config -v /mnt/downloads/transcode:/transcode -v /storage/Media:/data plexinc/pms-docker
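
One likely explanation (hedged; this is generic Docker mount-propagation behaviour rather than anything specific to this image): the Plex container's bind mount defaults to private propagation, so when the cloud container re-creates the union mount underneath the media path, Plex keeps pointing at the dead mount. Declaring the Plex media volume with slave propagation lets remounts from the host propagate into the Plex container, e.g. the same run command with :rslave on the media mount:

sudo docker run -d --name plex --restart=always --network=host -e TZ="America/Chicago" -e PLEX_CLAIM="redacted" -v /docker/containers/plex/config:/config -v /mnt/downloads/transcode:/transcode -v /storage/Media:/data:rslave plexinc/pms-docker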

deploy not possible with Portainer

Hi,

I successfully installed this script on Ubuntu 18.04.
Thanks to the author for building it!

I'm just passing by to share that I was not able to redeploy this container using Portainer after even a minor change (restart conditions).

Use this with docker-compose?

I would like to add this to my docker-compose.yml alongside the rest of my applications; however, I am unsure how to translate this line into that format:

--privileged --cap-add=MKNOD --cap-add=SYS_ADMIN --device=/dev/fuse \

Any idea?

Thanks!
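
For reference, a hedged sketch of how those flags usually map into Compose syntax (only the flags from that line are shown; volumes and environment variables would be added in the same way as in the docker create examples above):

version: "2"
services:
  cloud-media-scripts:
    image: madslundt/cloud-media-scripts
    privileged: true
    cap_add:
      - MKNOD
      - SYS_ADMIN
    devices:
      - "/dev/fuse:/dev/fuse"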

Enhancement: Media Deletion

So I sometimes delete items, especially since I also use the DVR part of Plex. However, if I delete them from local-media they remain in the cloud. Maybe this is already possible, but I just haven't figured it out yet.

Would it be possible to script deleting files from the cloud when they are deleted from local-media? Maybe an additional function or something along that line? I was thinking of using a diff to compare the two mounts and, if a file has been deleted from local-media, removing it from the rclone mount.

This one might be a tad slow and I'm sure it could be improved on, but it's a start for me at least. I manually mounted the encrypted dir to /test to test this. The downside is that, since I don't know how deletion from local-media works (I know it doesn't delete from the cloud), I figure the file eventually needs to show back up? I'm new to Plexdrive and rclone, so this is just a guess at a starting point.

comm -23 <(cd /test && find . -type f | sort) <(cd /local-media && find . -type f | sort) \
  | while IFS= read -r i; do rm "/test/${i#./}"; done
