Comments (6)
I made some (very brief) notes when I set this up a while back; I'll paste them below. The basic outline is: create the danbooru2_archive database and configure both archives and danbooru to use it; then sign up for Amazon, create an SQS queue, and configure both archives and danbooru to use that too. The final step is running:
RUN=1 bundle exec ruby services/sqs_processor.rb --pidfile=tmp/sqs.pid --logfile=stdout
This is the daemon that listens for post update messages from SQS and saves them to the database.
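The core of such a daemon is a poll–decode–save–acknowledge loop. A minimal sketch, with everything stubbed in plain Ruby (the real services/sqs_processor.rb uses the aws-sdk gem and the archive models; StubQueue and ArchiveStore here are hypothetical stand-ins):

```ruby
require "json"

# Hypothetical stand-in for the SQS client: hands back queued message
# bodies one at a time, nil when drained.
class StubQueue
  def initialize(messages)
    @messages = messages
  end

  def receive_message
    @messages.shift
  end

  def delete_message(msg)
    # The real SQS client deletes by receipt handle; no-op here.
  end
end

# Hypothetical stand-in for the archive database.
class ArchiveStore
  attr_reader :versions

  def initialize
    @versions = []
  end

  def save(version)
    @versions << version
  end
end

# Poll until the queue is empty: decode each message, persist it,
# and only then acknowledge (delete) it.
def process(queue, store)
  while (msg = queue.receive_message)
    version = JSON.parse(msg) # e.g. {"post_id": 1, "tags": "..."}
    store.save(version)
    queue.delete_message(msg)
  end
end

queue = StubQueue.new(['{"post_id":1,"tags":"touhou"}'])
store = ArchiveStore.new
process(queue, store)
puts store.versions.first["post_id"] # => 1
```

Deleting only after a successful save means a crashed worker leaves the message on the queue to be retried, which is the usual at-least-once SQS pattern.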
DB:
* git clone https://github.com/r888888888/archives
* bundle install
* cp .env-SAMPLE .env
* .env: configure POSTGRES_DB / POSTGRES_USER.
* configure same db for danbooru in danbooru/.env.local or danbooru/config/database.yml.
* bundle exec rake db:setup
* psql danbooru2_archive # test that database works
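For reference, the POSTGRES_DB / POSTGRES_USER values from .env end up forming an ordinary Postgres connection URL of the kind shown in .env.development below. A hypothetical helper showing the shape (the app's actual config code may assemble it differently):

```ruby
# Sketch: combine POSTGRES_* environment values into a Postgres URL.
# The host defaults to localhost, matching the URLs used elsewhere here.
def archive_database_url(env)
  user = env["POSTGRES_USER"]
  db   = env.fetch("POSTGRES_DB")
  host = env.fetch("POSTGRES_HOST", "localhost")
  userpart = user ? "#{user}@" : ""
  "postgresql://#{userpart}#{host}/#{db}"
end

puts archive_database_url(
  "POSTGRES_DB" => "danbooru2_archive", "POSTGRES_USER" => "danbooru"
)
# => postgresql://danbooru@localhost/danbooru2_archive
```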
SQS:
* sign up for aws
* create new sqs queue
* create iam group and assign sqs privs
* create iam user and save access key / secret access key
* .env: configure access key / secret access key.
* foreman start
test SQS:
* sudo dnf install awscli # for fedora
* aws configure # input access key / secret key
* aws sqs list-queues
* aws sqs send-message --queue-url "$(aws sqs get-queue-url --queue-name devbooru)" --message-body "test message"
* aws sqs receive-message --queue-url "$(aws sqs get-queue-url --queue-name devbooru)"
* configure danbooru/config/danbooru_local_config.rb and danbooru/config/database.yml
# archives/.env config file:
AMAZON_SQS_REGION=us-east-1
AMAZON_KEY=redacted
AMAZON_SECRET=redacted
SQS_ARCHIVES_URL=https://sqs.us-east-1.amazonaws.com/redacted/devbooru
POSTGRES_DB=danbooru2_archive
POSTGRES_USER=danbooru
RAILS_ENV=development
# danbooru/.env.local config file:
# These settings take precedence over config/unicorn/unicorn.rb.
export UNICORN_ROOT=/home/danbooru/src/danbooru
export UNICORN_TIMEOUT=60
export UNICORN_LOG=/dev/stdout
export SECRET_TOKEN=redacted
export SESSION_SECRET_KEY=redacted
# These settings take precedence over config/danbooru_local_config.rb.
export DANBOORU_APP_NAME="Devbooru"
export DANBOORU_HOSTNAME="devbooru.evazion.ml"
export DANBOORU_SOURCE_CODE_URL="https://github.com/evazion/danbooru"
export DANBOORU_IQDBS_AUTH_KEY="redacted"
export DANBOORU_IQDBS_SERVER="http://127.0.0.1:4567"
export DANBOORU_AWS_SQS_IQDB_URL="https://sqs.us-east-1.amazonaws.com/redacted/iqdb"
export DANBOORU_AWS_SQS_ARCHIVES_URL="https://sqs.us-east-1.amazonaws.com/redacted/devbooru"
export DANBOORU_AWS_ACCESS_KEY_ID="redacted"
export DANBOORU_AWS_SECRET_ACCESS_KEY="redacted"
export DANBOORU_AWS_SQS_REGION="us-east-1"
export GOOGLE_API_JSON_KEY_PATH="$UNICORN_ROOT/.google-key.json"
# danbooru/.env.development config file:
export UNICORN_LISTEN=0.0.0.0:3000
export UNICORN_PROCESSES=1
export DATABASE_URL="postgresql://localhost/danbooru2?pool=5&timeout=5000"
export RO_DATABASE_URL="postgresql://localhost/danbooru2"
export ARCHIVE_DATABASE_URL="postgresql://localhost/danbooru2_archive"
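All three config files above are plain KEY=value lines, optionally prefixed with `export`. A minimal parser sketch, assuming the apps load them dotenv-style (in reality this is presumably handled by the dotenv gem; this only covers the simple line forms shown above):

```ruby
# Minimal .env parser: handles `KEY=value`, `export KEY=value`,
# blank lines, comments, and double-quoted values. Not a full
# dotenv implementation (no escapes or interpolation).
def parse_env(text)
  text.each_line.with_object({}) do |line, env|
    line = line.strip.sub(/\Aexport\s+/, "")
    next if line.empty? || line.start_with?("#")
    key, _, value = line.partition("=")
    env[key] = value.delete_prefix('"').delete_suffix('"')
  end
end

env = parse_env(<<~ENVFILE)
  # danbooru/.env.local (excerpt)
  POSTGRES_DB=danbooru2_archive
  export DANBOORU_APP_NAME="Devbooru"
ENVFILE
puts env["DANBOORU_APP_NAME"] # => Devbooru
```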
The "test SQS" step is optional; it just shows you how to confirm that SQS is set up properly.
Thanks, I think I got it! Amazing instructions.
Hey, sorry again lol, but I figured I'd ask here instead of making a second issue elsewhere.
You wouldn't happen to have similar instructions for iqdbs, would you?
Sorry, but it's been a while since I installed iqdbs and I forgot to write down the procedure once I got it working. The basic idea is similar though: run bundle install, configure .env, create and configure an SQS queue, then run the commands in the Procfile.
Also you'll have to compile and install iqdb itself; refer to the README for that. Danbooru's iqdb fork is a little outdated and I had to patch a few things to get it to compile cleanly. There's a newer release from 2016 at https://iqdb.org/code/ that might work better (I haven't tried it yet).
Yeah, I think I got it. Thanks a lot. The only issue I seem to have left is importing to it?
The script doesn't seem to work for me. Poking around leads me to believe that /script/fixes/029_iqdb_import.rb predates the creation of iqdbs? Correct me if I'm wrong.
It was. You'd have to script it yourself. I haven't done it myself, but something like this might work:
#!/bin/sh
rails runner 'Post.all.pluck(:id, :md5).each { |id, md5| puts "#{id}:#{md5}.jpg" }' > files.txt
iqdb add iqdb.db < files.txt
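On a large posts table, Post.all.pluck loads every row at once. A batched variant of the line-formatting step might look like this (a sketch only: FakePost is a hypothetical stand-in so the logic runs without ActiveRecord; with the real Post model you'd pluck per batch via in_batches or find_each):

```ruby
# Emit "id:md5.jpg" lines, the format consumed by `iqdb add` above,
# processing posts in fixed-size slices instead of one giant array.
FakePost = Struct.new(:id, :md5)

def iqdb_lines(posts, batch_size: 1000)
  posts.each_slice(batch_size).flat_map do |batch|
    batch.map { |p| "#{p.id}:#{p.md5}.jpg" }
  end
end

posts = [FakePost.new(1, "d34db33f"), FakePost.new(2, "c0ffee00")]
puts iqdb_lines(posts)
# 1:d34db33f.jpg
# 2:c0ffee00.jpg
```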