anki-sync-server-rs's People

Contributors

dobefore, redmie, sagit-chu

anki-sync-server-rs's Issues

/lib64/libm.so.6: version `GLIBC_2.xx' not found

[root@bogon ankisyncd_1.1.0_linux_x64]# ./ankisyncd
./ankisyncd: /lib64/libm.so.6: version `GLIBC_2.29' not found (required by ./ankisyncd)
./ankisyncd: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by ./ankisyncd)
./ankisyncd: /lib64/libc.so.6: version `GLIBC_2.25' not found (required by ./ankisyncd)
./ankisyncd: /lib64/libc.so.6: version `GLIBC_2.33' not found (required by ./ankisyncd)
./ankisyncd: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by ./ankisyncd)
./ankisyncd: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by ./ankisyncd)
[root@bogon ankisyncd_1.1.0_linux_x64]#

How can this problem be solved?

Logical error in sync.rs

In sync.rs, in the function get_resp_data at line 384, some of the branches of the if cannot be reached (as we unwrap the checked result before this if).

We must replace this condition with pattern matching.
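
As a minimal illustration of the intended change (the names and types below are made up, not the actual get_resp_data internals), matching on the value directly removes the unreachable branches:

// Illustrative sketch: branch via pattern matching instead of unwrap-then-if,
// so every case is handled explicitly and none is silently unreachable.
fn describe(resp_data: Option<&[u8]>) -> &'static str {
    match resp_data {
        Some(bytes) if bytes.is_empty() => "empty payload",
        Some(_) => "payload present",
        None => "no payload",
    }
}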

Investigate an easier-to-upgrade dependency on the anki lib

For now, the anki folder contains a patched copy of the Anki source code, and we use its internal Rust library inside our server. This makes it difficult to keep the library updated (and we need to, as our rusqlite and config versions are starting to lag behind and cannot be upgraded without upgrading Anki).

I propose keeping just a patch for the Anki source code and depending on it either as a submodule we can git apply on, or via Cargo's integrated patching system (see this).

cargo patch would fit the bill nicely, but I am wary of issues like patch application and reliance on private, changing APIs. So I think it is best we avoid that, as our goal is to have an easy-to-maintain patch instead of a whole Anki repository to take care of and upgrade.
As such, mature tooling seems (to me at least) the way to go.
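
For reference, Cargo's built-in patching looks roughly like the sketch below; the git URL and local path are placeholders, not the project's actual setup.

# Cargo.toml (sketch, assumptions only): depend on upstream anki, then point Cargo
# at a checkout that carries just our small patch on top of it.
[dependencies]
anki = { git = "https://github.com/ankitects/anki" }

[patch."https://github.com/ankitects/anki"]
anki = { path = "vendor/anki-patched" }  # hypothetical local path with the patch applied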

Server is prone to crashing

The high number of unwrap calls in the code makes the server fragile.
Even if no system-related error (missing disk space, ...) happens, encountering any malformed data will make the server crash; we MUST fix that.

See pull request #9 for a step in the right direction

After the merge we will be down to:

parse.rs:0
error.rs:0
user.rs:2
main.rs:6
db.rs:0
sync.rs:29
schema.sql:0
media.rs:27
session.rs:0

Heavy work is still needed on sync.rs and media.rs.
@dobefore Could you do it in the spirit of #9?
I'm currently saturated by a week of unwrap fixing on this project, which was not pleasant, to say the least.
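
For illustration, the kind of change made in #9 looks roughly like this (the function and paths are made up, not the actual sync.rs code):

use std::{fs, io};

// Before (sketch): any failure panics and takes the actix worker down.
//   fn load_meta(path: &str) -> String {
//       String::from_utf8(fs::read(path).unwrap()).unwrap()
//   }

// After (sketch): errors are propagated with `?` so the handler can turn them
// into an HTTP error response instead of crashing the server.
fn load_meta(path: &str) -> Result<String, io::Error> {
    let bytes = fs::read(path)?;
    String::from_utf8(bytes).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
}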

Users cannot be dynamically added

After the following command is executed to add a user:

./ankisyncd user --add username password

synchronization still cannot be performed through the Anki client, because the Anki user is specified during server initialization. Could this be changed so that users are picked up dynamically?


Sync error

I tried to change the sync server from the Python server to this sync server with Anki desktop 2.1.49.
I changed the URL in the add-on to the new server and ran sync. Both normal and force sync produced an error.

ankisyncd 0.1.2 (latest master) in Docker container behind reverse proxy (local server produces same error)
RUST_BACKTRACE enabled
Default config, I just added one user

Normal sync:

[2021-12-11T19:43:04Z INFO  actix_server::builder] Starting 2 workers
[2021-12-11T19:43:04Z INFO  actix_server::builder] Starting "actix-web-service-0.0.0.0:27701" service on 0.0.0.0:27701
thread 'actix-rt:worker:0' panicked at 'called `Option::unwrap()` on a `None` value', src/sync.rs:388:38
stack backtrace:
   0: rust_begin_unwind
             at ./rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/std/src/panicking.rs:517:5
   1: core::panicking::panic_fmt
             at ./rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/core/src/panicking.rs:101:14
   2: core::panicking::panic
             at ./rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/core/src/panicking.rs:50:5
   3: ankisyncd::sync::sync_app::{{closure}}
   4: <actix_web::handler::HandlerServiceResponse<T,R> as core::future::future::Future>::poll
   5: <actix_web::handler::ExtractResponse<T,S> as core::future::future::Future>::poll
   6: <actix_web::handler::ExtractResponse<T,S> as core::future::future::Future>::poll
   7: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
   8: <futures_util::future::future::Map<Fut,F> as core::future::future::Future>::poll
   9: <core::pin::Pin<P> as core::future::future::Future>::poll
  10: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
  11: <actix_web::middleware::logger::LoggerResponse<S,B> as core::future::future::Future>::poll
  12: actix_http::h1::dispatcher::InnerDispatcher<T,S,B,X,U>::handle_request
  13: actix_http::h1::dispatcher::InnerDispatcher<T,S,B,X,U>::poll_request
  14: <actix_http::h1::dispatcher::Dispatcher<T,S,B,X,U> as core::future::future::Future>::poll
  15: <actix_service::and_then::AndThenServiceResponse<A,B> as core::future::future::Future>::poll
  16: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  17: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
  18: std::panicking::try
  19: tokio::runtime::task::harness::Harness<T,S>::poll
  20: std::thread::local::LocalKey<T>::with
  21: tokio::task::local::LocalSet::tick
  22: tokio::macros::scoped_tls::ScopedKey<T>::set
  23: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  24: tokio::macros::scoped_tls::ScopedKey<T>::set
  25: tokio::runtime::basic_scheduler::BasicScheduler<P>::block_on
  26: tokio::runtime::handle::Handle::enter
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread 'actix-rt:worker:1' panicked at 'called `Option::unwrap()` on a `None` value', src/sync.rs:278:20
stack backtrace:
   0: rust_begin_unwind
             at ./rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/std/src/panicking.rs:517:5
   1: core::panicking::panic_fmt
             at ./rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/core/src/panicking.rs:101:14
   2: core::panicking::panic
             at ./rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/core/src/panicking.rs:50:5
   3: ankisyncd::sync::add_col
   4: ankisyncd::sync::sync_app::{{closure}}
   5: <actix_web::handler::HandlerServiceResponse<T,R> as core::future::future::Future>::poll
   6: <actix_web::handler::ExtractResponse<T,S> as core::future::future::Future>::poll
   7: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
   8: <futures_util::future::future::Map<Fut,F> as core::future::future::Future>::poll
   9: <core::pin::Pin<P> as core::future::future::Future>::poll
  10: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
  11: <actix_web::middleware::logger::LoggerResponse<S,B> as core::future::future::Future>::poll
  12: actix_http::h1::dispatcher::InnerDispatcher<T,S,B,X,U>::poll_response
  13: <actix_http::h1::dispatcher::Dispatcher<T,S,B,X,U> as core::future::future::Future>::poll
  14: <actix_service::and_then::AndThenServiceResponse<A,B> as core::future::future::Future>::poll
  15: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  16: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
  17: std::panicking::try
  18: tokio::runtime::task::harness::Harness<T,S>::poll
  19: std::thread::local::LocalKey<T>::with
  20: tokio::task::local::LocalSet::tick
  21: tokio::macros::scoped_tls::ScopedKey<T>::set
  22: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  23: tokio::macros::scoped_tls::ScopedKey<T>::set
  24: tokio::runtime::basic_scheduler::BasicScheduler<P>::block_on
  25: tokio::runtime::handle::Handle::enter
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

With force sync in one direction enabled:

thread 'actix-rt:worker:0' panicked at 'called `Option::unwrap()` on a `None` value', src/sync.rs:278:20
stack backtrace:
   0: rust_begin_unwind
             at ./rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/std/src/panicking.rs:517:5
   1: core::panicking::panic_fmt
             at ./rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/core/src/panicking.rs:101:14
   2: core::panicking::panic
             at ./rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/core/src/panicking.rs:50:5
   3: ankisyncd::sync::add_col
   4: ankisyncd::sync::sync_app::{{closure}}
   5: <actix_web::handler::HandlerServiceResponse<T,R> as core::future::future::Future>::poll
   6: <actix_web::handler::ExtractResponse<T,S> as core::future::future::Future>::poll
   7: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
   8: <futures_util::future::future::Map<Fut,F> as core::future::future::Future>::poll
   9: <core::pin::Pin<P> as core::future::future::Future>::poll
  10: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
  11: <actix_web::middleware::logger::LoggerResponse<S,B> as core::future::future::Future>::poll
  12: actix_http::h1::dispatcher::InnerDispatcher<T,S,B,X,U>::poll_response
  13: <actix_http::h1::dispatcher::Dispatcher<T,S,B,X,U> as core::future::future::Future>::poll
  14: <actix_service::and_then::AndThenServiceResponse<A,B> as core::future::future::Future>::poll
  15: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  16: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
  17: std::panicking::try
  18: tokio::runtime::task::harness::Harness<T,S>::poll
  19: std::thread::local::LocalKey<T>::with
  20: tokio::task::local::LocalSet::tick
  21: tokio::macros::scoped_tls::ScopedKey<T>::set
  22: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  23: tokio::macros::scoped_tls::ScopedKey<T>::set
  24: tokio::runtime::basic_scheduler::BasicScheduler<P>::block_on
  25: tokio::runtime::handle::Handle::enter
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread 'actix-rt:worker:1' panicked at 'called `Option::unwrap()` on a `None` value', src/sync.rs:388:38
stack backtrace:
   0: rust_begin_unwind
             at ./rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/std/src/panicking.rs:517:5
   1: core::panicking::panic_fmt
             at ./rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/core/src/panicking.rs:101:14
   2: core::panicking::panic
             at ./rustc/59eed8a2aac0230a8b53e89d4e99d55912ba6b35/library/core/src/panicking.rs:50:5
   3: ankisyncd::sync::sync_app::{{closure}}
   4: <actix_web::handler::HandlerServiceResponse<T,R> as core::future::future::Future>::poll
   5: <actix_web::handler::ExtractResponse<T,S> as core::future::future::Future>::poll
   6: <actix_web::handler::ExtractResponse<T,S> as core::future::future::Future>::poll
   7: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
   8: <futures_util::future::future::Map<Fut,F> as core::future::future::Future>::poll
   9: <core::pin::Pin<P> as core::future::future::Future>::poll
  10: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
  11: <actix_web::middleware::logger::LoggerResponse<S,B> as core::future::future::Future>::poll
  12: actix_http::h1::dispatcher::InnerDispatcher<T,S,B,X,U>::handle_request
  13: actix_http::h1::dispatcher::InnerDispatcher<T,S,B,X,U>::poll_request
  14: <actix_http::h1::dispatcher::Dispatcher<T,S,B,X,U> as core::future::future::Future>::poll
  15: <actix_service::and_then::AndThenServiceResponse<A,B> as core::future::future::Future>::poll
  16: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  17: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
  18: std::panicking::try
  19: tokio::runtime::task::harness::Harness<T,S>::poll
  20: std::thread::local::LocalKey<T>::with
  21: tokio::task::local::LocalSet::tick
  22: tokio::macros::scoped_tls::ScopedKey<T>::set
  23: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  24: tokio::macros::scoped_tls::ScopedKey<T>::set
  25: tokio::runtime::basic_scheduler::BasicScheduler<P>::block_on
  26: tokio::runtime::handle::Handle::enter
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

different log info due to version bump

Since MR #18 was merged, actix-web seems to behave differently.
For example, the following results are from running on Windows with an 8-core Intel CPU.

Before the merge, the program appeared to start 8 workers; this has been reduced to 4.
after #18

[2022-03-27T13:11:34Z INFO  actix_server::builder] Starting 4 workers
[2022-03-27T13:11:34Z INFO  actix_server::server] Actix runtime found; starting in Actix runtime

before #18


[2022-03-27T13:16:36Z INFO  actix_server::builder] Starting 8 workers
[2022-03-27T13:16:36Z INFO  actix_server::builder] Starting "actix-web-service-0.0.0.0:27701" service on 0.0.0.0:27701

CI/CD system

Set up a CI system for build testing and QA before merging.

It is needed for #16 for example.
The CI must run pre-commit checks that will be introduced by #17.

Make dependence on rustls optional

I host on constrained resources behind an nginx reverse proxy that handles HTTPS.
As such, I would benefit from having rustls as an optional dependency, for a reduced attack surface and a smaller binary size.

This can be achieved by using Cargo feature flags (#[cfg(...)]).
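
A minimal sketch of what that could look like; the feature name, crate versions, and dependency features below are assumptions, not the project's current manifest.

# Cargo.toml (sketch): TLS support is only compiled in when the "tls" feature is enabled.
[features]
default = []
tls = ["rustls", "actix-web/rustls"]

[dependencies]
rustls = { version = "0.20", optional = true }  # version is illustrative

Server code guarded with #[cfg(feature = "tls")] would then only be built when users opt in via cargo build --features tls.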

docker 0.2.6 version cannot run

Problem Description

Trying to use docker compose to run version 0.2.6: when running, it prints anki-container | /usr/local/bin/ankisyncd: error while loading shared libraries: libssl.so.3: cannot open shared object file: No such file or directory. The previous version was fine.

Docker Compose

version: "3"

services:
    anki-container:
        image: ankicommunity/anki-sync-server-rs:latest
        container_name: anki-container
        restart: unless-stopped
        environment:
          - ANKISYNCD_USERNAME=test
          - ANKISYNCD_PASSWORD=123456
        ports:
          - "27701:27701"

When modifying the save location of "root_dir", other files are also stored in "/app"

Problems encountered

After modifying root_dir to root_dir = "/home/anki/data", auth.db and session.db are indeed saved in /home/anki/data and a user can be created, but when logging in it reports "Authentication failed for nonexistent user", even though the user just created shows up in "ankisyncd user --list". After restoring the root_dir setting, everything works normally.

Idea

Make the default storage location for data configurable.

Files and directories currently stored in "/app":

# ls /app
ankisyncd.toml  auth.db  collections  session.db
# 

Dockerfile suggestions

For anyone interested in a Dockerfile that doesn't need to compile for 15+ minutes: it is possible to specify the version as a build argument; otherwise the latest release is used.
Another option would be to directly publish a Docker image to Docker Hub (or another registry) from a CI job.
The current Dockerfile also doesn't specify a workdir, so it's not easy to have persistent storage using a volume.

FROM alpine:latest
ARG VERSION
ARG PLATFORM=linux
WORKDIR /app
RUN if [ -z "$VERSION" ] ; then \
    wget -q https://api.github.com/repos/ankicommunity/anki-sync-server-rs/releases/latest -O - \
    | grep "browser_download_url.*tar.gz" | cut -d : -f 2,3 | tr -d \" | grep ${PLATFORM} | xargs wget -O ankisyncd.tar.gz -q; \
    else \
    wget https://github.com/ankicommunity/anki-sync-server-rs/releases/download/${VERSION}/ankisyncd-${VERSION}-${PLATFORM}.tar.gz -O ankisyncd.tar.gz; \
    fi \
    && tar -xf ankisyncd.tar.gz \
    && rm ankisyncd.tar.gz \
    && mv ankisyncd-${PLATFORM}/ankisyncd /usr/local/bin/ankisyncd \
    && mv ankisyncd-${PLATFORM}/Settings.toml . \
    && rmdir ankisyncd-${PLATFORM}

ENTRYPOINT ["ankisyncd"]
HEALTHCHECK --interval=30s --timeout=3s CMD wget --spider -q http://0.0.0.0:27701
EXPOSE 27701

Investigate automated testing

There is a need to automate testing in order to ensure the server works well with anki.
This task is hard.

In terms of sync tests we should have:

  1. Empty anki syncs to empty server
  2. Anki with text decks syncs to empty server
  3. Anki with more text decks syncs to a partially synced server (from step 2)
  4. Anki with media decks syncs to an empty server
  5. Same as 3, but for media decks
  6. Anki syncs decks whose history conflicts with the server

In terms of clients tested, we should have:

  • anki desktop
  • ankidroid
  • Anki on iOS (I see no way to automate testing for this one; not a priority for me)

The success conditions are:

  • No client error (or specific error if it is the test goal)
  • No server error
  • All data correctly synced

Please tell me if you see things I have forgotten.

Road to automated testing is long, I think the first steps are:

  • Run anki desktop in container
  • Control anki desktop in the container
  • Run ankidroid in container
  • Control ankidroid in container
  • Write basic sync test (empty/empty)

I managed to run the latest Anki desktop in a debian:stable-slim container using the xvfb virtual framebuffer.
Anki is thus running, but one cannot access the GUI directly (it would not be useful for testing anyway).
If you have any idea how to remote-control it (using an Anki REPL, Qt testing, whatever...), I'm all ears!


Under AGPL3, POC Containerfile for testing:

FROM rust:latest as builder
WORKDIR /usr/src/anki-sync-server-rs
COPY . .
RUN cargo install --path .

FROM debian:stable as test
# Install tool needed to install deps & anki
RUN apt-get update && apt-get install -y aptitude wget bzip2 xvfb && rm -rf /var/lib/apt/lists/*
# Install anki official debian package deps
# See https://askubuntu.com/questions/74478/how-to-install-only-the-dependencies-of-a-package
RUN apt-get update && aptitude search '!~i?reverse-depends("^anki$")' -F "%p" | xargs apt-get install -y && rm -rf /var/lib/apt/lists/*
# Set C.UTF8 local for anki to start
RUN echo 'export LC_CTYPE=C.UTF-8' >> /etc/profile
# Help qt to find xvfb screen
RUN echo 'export QT_QPA_PLATFORM=minimal' >> /etc/profile

COPY --from=builder /usr/local/cargo/bin/ankisyncd /usr/local/bin/ankisyncd
COPY ./tests /usr/tests
CMD ["/usr/tests/run_all"]

Potential high IO usage by sqlite

On every remote user action, one or more sqlite databases are opened and then closed (as we call close() on the rusqlite::Connection).
This leads to higher resource usage than needed when handling traffic, as it creates many syscalls to the underlying OS (and syscalls are inherently slow).

We could instead keep one connection open for each database before launching the web server, use only these open connections when doing actual work, and close them only at server shutdown.

This can be implemented manually, or we can use a connection pool such as r2d2 to do the trick.
I think using a ready-made crate may be less error prone, but at the same time I try to be wary of pulling in new dependencies, as they increase the burden on maintainers. I'm still undecided on this design choice.
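
A minimal sketch of the pooled approach, assuming the r2d2 and r2d2_sqlite crates (neither is currently a dependency of this project); the database path is illustrative.

use r2d2::Pool;
use r2d2_sqlite::SqliteConnectionManager;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build the pool once, before the web server starts.
    let manager = SqliteConnectionManager::file("collections/test_user/collection.anki2");
    let pool: Pool<SqliteConnectionManager> = Pool::builder().max_size(4).build(manager)?;

    // Handlers borrow a connection from the pool instead of opening and closing
    // a fresh one on every request; connections return to the pool on drop.
    let conn = pool.get()?;
    let tables: i64 = conn.query_row("SELECT count(*) FROM sqlite_master", [], |row| row.get(0))?;
    println!("tables: {tables}");
    Ok(())
}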

Incorrect log time

This is caused by the env_logger crate, which uses UTC instead of local time.
So we have to plug local-time formatting into env_logger to get the correct time.
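
One possible fix is to give env_logger a custom format that uses local time; the sketch below assumes the chrono crate is available, and the format string is illustrative.

use std::io::Write;

// Sketch: replace env_logger's default (UTC) timestamp with a local-time one.
fn init_logger() {
    env_logger::Builder::from_default_env()
        .format(|buf, record| {
            writeln!(
                buf,
                "[{} {:<5} {}] {}",
                chrono::Local::now().format("%Y-%m-%dT%H:%M:%S%z"),
                record.level(),
                record.target(),
                record.args()
            )
        })
        .init();
}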

I hope to support dynamic deletion of users and dynamic modification of passwords.

After executing the command to delete a user:

./ankisyncd user --del username

The Anki client can still log in and sync, indicating that the user has not been dynamically deleted.

Similarly,

./ankisyncd user --pass username new_pass

The Anki client can still log in using the old password, indicating that the user password has not been dynamically changed.


Use an in-memory database to manage sessions

Sessions do not need to survive server restart.
As such there is no need to keep them on disk (it's actually bad for both security and performance).

We should investigate the use of an in memory database for sessions.

Maybe something as simple as a Rust HashMap would be sufficient; maybe we need something more powerful like sled.
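
As a minimal sketch of the HashMap option (the Session fields are illustrative, not the real struct):

use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical session payload; the real struct would hold whatever the sync
// protocol needs (user name, sync state, ...).
#[derive(Clone)]
struct Session {
    username: String,
}

// Shared across actix workers; nothing touches disk, so sessions vanish on restart.
type SessionStore = Arc<Mutex<HashMap<String, Session>>>;

fn insert(store: &SessionStore, hkey: String, session: Session) {
    store.lock().unwrap().insert(hkey, session);
}

fn lookup(store: &SessionStore, hkey: &str) -> Option<Session> {
    store.lock().unwrap().get(hkey).cloned()
}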

What are your thoughts on the subject @dobefore ?

iPhone crashes after full sync enabled

I tried to enable full sync several times with my Mac (Anki version 2.1.65) and iPhone (AnkiMobile version 2.0.94), and the mobile app crashed every time after forcing a full download to my phone, but it works if I force a full download to my computer. Not sure what causes this.

Decide on error handling practices

The current number of unwrap calls in the code is huge (cf. the code block at the end); it makes the server prone to failures, and we must fix that.
To lower this number we need to handle errors properly (layered error handling).

The current best-practice crates in Rust are thiserror and anyhow.
Fatal AND non-fatal errors should be logged (using the error/warn/info/trace macros of the log crate); we may benefit from migrating to a more complete logging backend (such as fern) down the line. For now I think env_logger should be enough.

I am a proponent of using thiserror, as it integrates more tightly and allows control over the error type (which makes code safety and maintenance much easier). In comparison, anyhow makes functions return an opaque error type, which is easier to use but more error prone.
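
As a rough sketch of the thiserror approach (the variants below are illustrative, not a proposal for the actual error type):

use thiserror::Error;

// Illustrative error type; real variants would mirror the server's failure modes.
#[derive(Debug, Error)]
enum SyncError {
    #[error("database error: {0}")]
    Db(#[from] rusqlite::Error),
    #[error("i/o error: {0}")]
    Io(#[from] std::io::Error),
    #[error("malformed client request: {0}")]
    BadRequest(String),
}

fn open_collection(path: &str) -> Result<rusqlite::Connection, SyncError> {
    // `?` converts rusqlite::Error into SyncError::Db via the #[from] impl,
    // instead of unwrapping and panicking.
    Ok(rusqlite::Connection::open(path)?)
}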
You have my take 😉

@dobefore What are your views on which error library to use?


Unwrap count:

(src/) $ grep -R -c "unwrap()" 
get_card.sql:0
get_note.sql:0
get_review.sql:0
user.rs:37
main.rs:12
db.rs:1
sync.rs:82
envconfig.rs:14
schema.sql:0
media.rs:27
session.rs:28

Lack of documentation

Many functions have cryptic names, like get_md_mf or last_usn for example.

This makes code maintenance hard, as one struggles to figure out what a function does and where it is used.
We should document what each function does, and maybe explain the server architecture in a markdown file in docs/.

Could you help me with that, @dobefore, as you are the main contributor of the original server logic?

Implement true arg parsing

The current argument parsing of ankisyncd is very limited.
We should leverage an arg parsing crate such as clap in order to improve this.
It will allow for:

  • help messages (using -h/--help)
  • clean sub command for user management
  • Configuration file path/parameter overwrite

If we use clap, we should go directly for 3.0.0-beta.5, as v3 is nearing stable, will be supported longer than v2, and is much easier to work with.
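
A minimal sketch of what that could look like with the clap v3 builder API (method names shifted slightly between the betas and the final release, so treat this as approximate, and the flag and subcommand names are illustrative):

use clap::{App, Arg};

fn main() {
    let matches = App::new("ankisyncd")
        .about("A standalone Anki sync server")
        .arg(
            Arg::new("config")
                .short('c')
                .long("config")
                .takes_value(true)
                .help("Sets a custom config file, e.g. -c ankisyncd.toml"),
        )
        .subcommand(
            App::new("user").about("Manage user accounts").arg(
                Arg::new("add")
                    .short('a')
                    .long("add")
                    .multiple_values(true)
                    .help("Create a user account, e.g. -a user password"),
            ),
        )
        .get_matches();

    if let Some(path) = matches.value_of("config") {
        println!("using config file {path}");
    }
}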

Maximum upload size

There is a limit on upload size: syncing fails when the (uncompressed) upload size exceeds 250 MB.

After inspecting the Anki source code a bit, I found that we can change these limits by setting two environment variables in the add-on code.

The two env vars are MAX_UPLOAD_MEGS_COMP and MAX_UPLOAD_MEGS_UNCOMP.

For example, we can set the uncompressed size to 5 GB and the compressed size to 1 GB:

os.environ["MAX_UPLOAD_MEGS_UNCOMP"] =str(1024*5)
    # 1 G
os.environ["MAX_UPLOAD_MEGS_COMP"] =str(1024*1)

Note: this only works with versions 2.1.28 and above.

tsudoko/anki-sync-server#69

Add-on page: https://ankiweb.net/shared/info/358444159

Add-on source hosted on GitHub: dobefore/SyncRedirect21@3781ab8

Compile error (docker)

I'm trying to compile using the Containerfile and docker.
Version: 0.1.2

[...]
   Compiling awc v2.0.3
   Compiling actix-web v3.3.2
   Compiling actix-multipart v0.3.0
   Compiling ankisyncd v0.1.2 (/usr/src/anki-sync-server-rs)
error[E0432]: unresolved imports `clap::crate_description`, `clap::crate_name`, `clap::crate_version`
 --> src/parse.rs:1:12
  |
1 | use clap::{crate_description, crate_name, crate_version, App, AppSettings, Arg, ArgMatches};
  |            ^^^^^^^^^^^^^^^^^  ^^^^^^^^^^  ^^^^^^^^^^^^^ no `crate_version` in the root
  |            |                  |
  |            |                  no `crate_name` in the root
  |            no `crate_description` in the root

error: cannot determine resolution for the macro `crate_description`
   --> src/parse.rs:100:16
    |
100 |         .about(crate_description!())
    |                ^^^^^^^^^^^^^^^^^
    |
    = note: import resolution is stuck, try simplifying macro imports

error: cannot determine resolution for the macro `crate_version`
  --> src/parse.rs:99:18
   |
99 |         .version(crate_version!())
   |                  ^^^^^^^^^^^^^
   |
   = note: import resolution is stuck, try simplifying macro imports

error: cannot determine resolution for the macro `crate_name`
  --> src/parse.rs:98:14
   |
98 |     App::new(crate_name!())
   |              ^^^^^^^^^^
   |
   = note: import resolution is stuck, try simplifying macro imports

error[E0599]: no method named `about` found for struct `Arg` in the current scope
   --> src/parse.rs:106:18
    |
106 |                 .about("Sets a custom config file,ie -c ankisyncd.toml")
    |                  ^^^^^ method not found in `Arg<'_>`

error[E0599]: no method named `about` found for struct `Arg` in the current scope
   --> src/parse.rs:119:26
    |
119 |                         .about("create user account, i.e.-a user password")
    |                          ^^^^^ method not found in `Arg<'_>`

error[E0599]: no method named `about` found for struct `Arg` in the current scope
   --> src/parse.rs:129:26
    |
129 |                         .about("delete users,allow for multi-users, i.e.-d  user1 user2")
    |                          ^^^^^ method not found in `Arg<'_>`

error[E0599]: no method named `about` found for struct `Arg` in the current scope
   --> src/parse.rs:139:26
    |
139 |                         .about("change user's password, i.e.-p user newpassword")
    |                          ^^^^^ method not found in `Arg<'_>`

error[E0599]: no method named `about` found for struct `Arg` in the current scope
   --> src/parse.rs:147:26
    |
147 |                         .about("list all usernames extracted from db ,i.e. -l")
    |                          ^^^^^ method not found in `Arg<'_>`

Some errors have detailed explanations: E0432, E0599.
For more information about an error, try `rustc --explain E0432`.
error: failed to compile `ankisyncd v0.1.2 (/usr/src/anki-sync-server-rs)`, intermediate artifacts can be found at `/usr/src/anki-sync-server-rs/target`

Caused by:
  could not compile `ankisyncd` due to 9 previous errors
The command '/bin/sh -c cargo install --path .' returned a non-zero code: 101
ERROR: Service 'anki-rs' failed to build : Build failed

Authentication failed for nonexistent user

Using the latest Docker image for ARM64, and the latest AnkiDroid as a client.

This is what I get when creating a user after docker exec-ing into the container, either as

ankisyncd user -a testuser testpwd

or by going to /usr/local/bin and then running ./ankisyncd user -a testuser testpwd

(screenshot)

What I expected:

...for it to actually log in.


Edit:

Unsure if this happens because of a mismatch in the default config location or not (see root_dir).

(screenshot)

However, when I try to change the config location, it throws another exception:

(screenshot)


Edit 2: I can confirm that after copying the files over, logging in works as expected:

cp -r /usr/local/bin/* /app

Please either make the documentation clearer, or fix the location bug.

When I migrate from anki-sync-server to anki-sync-server-rs, the Anki client gets an HTTP 500 error

Thanks for the Rust version; it's pretty useful for me.

I've found a small problem:
when I migrated the server from the Python anki-sync-server to this Rust version, the first time I clicked "sync" the client got an HTTP 500 error.

Then I logged out of my previous account, retried logging in, and it worked.

This happens with both AnkiDroid and the Anki Windows client.

I can solve it by logging in again, but I think the better user experience would be to tell the user to log in again (for example, by showing the login dialog directly).

I've tried many versions of anki-sync-server; your Rust version is the only one that let me sync decks from PC to Android successfully. Thanks for your great work!

Log authentication failure & rate limiting

If this service is accessible from the internet, it becomes prey for scanners and malicious actors.

As such, logging authentication failures is necessary for most IDS tools (such as fail2ban) to detect and block attacks.

Rate limiting the auth endpoint would also be a good move.
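
As a small sketch of the logging half (the function name and how the peer address is obtained are assumptions, not existing code):

use log::warn;

// Emit a line that tools like fail2ban can match on.
// `peer_addr` and `username` would come from the actix request and the auth attempt.
fn log_auth_failure(peer_addr: &str, username: &str) {
    warn!("authentication failed for user '{}' from {}", username, peer_addr);
}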

Sync not reliable on desktop 2.1.60

Hi,

I am on an M1 MacBook running Anki desktop 2.1.60. I find that most of the time the sync times out, with it very occasionally working. I am using "https://myurl.tld/" with the container behind a reverse proxy. No problems on AnkiDroid using the */ and */msync URLs.

Logs from the server:

[2023-03-23T16:35:02+00:00 INFO  actix_web::middleware::logger] 10.42.0.243 "POST /sync/meta HTTP/1.1" 200 103 "-" "-" 0.006868
[2023-03-23T16:35:02+00:00 INFO  actix_web::middleware::logger] 10.42.0.243 "POST /sync/start HTTP/1.1" 200 43 "-" "-" 0.001131
[2023-03-23T16:36:03+00:00 INFO  anki::sync::http_server::user] aborting active sync

Logs from the client:

Error message:
Thu Mar 23 12:35:01 2023: Media sync starting...
Thu Mar 23 12:36:03 2023: Connection timed out. Please try again. If you see frequent timeouts, please try a different network connection.

Error details: (empty)

Additional details: container running in K3s with Traefik.

Happy to get you guys more details if needed

invalid dnsname

It works fine with ssl_enable=false, but if I switch to ssl_enable=true I get "invalid dnsname".
I'm using a self-signed certificate:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 3650 -nodes

Steps to reproduce:

Environment: Kubuntu 20.04

  1. Extracted ankisyncd-0.1.3-linux.tar.gz

  2. Added cert.pem and key.pem and adapted Settings.toml:


[address]
host = "127.0.0.1"
port = "27701"

# use the current executable path; only set the filename
[paths]
# set root_dir as the working dir where server data (collections folder) and databases (auth.db, ...) reside
root_dir = "."
# the following three lines are unnecessary and can be skipped
data_root = ""
auth_db_path = ""
session_db_path = ""

# a user will be added into auth.db if not empty; both fields must not be empty
[account]
username = "Test"
password = "Test123"

# embedded encrypted HTTP/HTTPS credentials if on an intranet
# set to true to enable SSL, false to disable
[localcert]
ssl_enable = true
cert_file = "cert.pem"
key_file = "key.pem"

Started ankisyncd:
./ankisyncd
[2022-03-19T21:38:11Z INFO actix_server::builder] Starting 24 workers
[2022-03-19T21:38:11Z INFO actix_server::builder] Starting "actix-web-service-127.0.0.1:27701" service on 127.0.0.1:27701

  3. Tried to connect but got the following message:
    Please check your internet connection.
    Error details: error sending request for url (): error trying to connect: invalid dnsname

(screenshots)

Docker build fails at prost-build (armv7h linux)

I tried out the docker build following your manual, and it failed while setting up the builder. By the way, I used docker and not podman.
The build failed at:
Step 4/8 : RUN cargo build --release

Message:

error: failed to run custom build command for prost-build v0.7.0

Caused by:
process didn't exit successfully: /usr/src/anki-sync-server-rs/target/release/build/prost-build-041e4c2d603755cc/build-script-build (exit status: 101)
--- stderr
thread 'main' panicked at 'Failed to find the protoc binary. The PROTOC environment variable is not set, there is no bundled protoc for this platform, and protoc is not in the PATH', /usr/local/cargo/registry/src/github.com-1285ae84e5963aae/prost-build-0.7.0/build.rs:100:10
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
error: build failed
The command '/bin/sh -c cargo build --release' returned a non-zero code: 101

Solution:

I checked what people suggest for this error message, and it seems you just have to tell the Debian-based rust builder container to install protobuf-compiler (my thanks to David Maze, https://stackoverflow.com/a/65539101).
I added the command:

RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive \
       apt-get install --no-install-recommends --assume-yes \
       protobuf-compiler

to the Dockerfile, right before RUN cargo build --release.
This fixed the issue in my case, and the build process finished successfully.
