microsoft / aerial_wildlife_detection
Tools for detecting wildlife in aerial images using active learning
License: MIT License
Any thoughts on the best approach to integrating a TensorFlow model? Specifically, how should model weights and architectures be saved to the database? Right now I save a file path in the stateDict, then load and save weights to/from that file. It doesn't seem possible to save the model directly to the database as is done for PyTorch.
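One possible workaround, sketched below under the assumption that the database column accepts binary blobs (as it does for the pickled PyTorch states): serialize the list of weight arrays (e.g. what `model.get_weights()` returns for a Keras model) to bytes with numpy, and store that blob instead of a file path. The helper names here are hypothetical, not part of AIDE.

```python
import io
import numpy as np

def weights_to_bytes(weights):
    """Pack a list of weight arrays (e.g. model.get_weights()) into a bytes blob."""
    buf = io.BytesIO()
    np.savez(buf, *weights)  # stored as arr_0, arr_1, ... in order
    return buf.getvalue()

def bytes_to_weights(blob):
    """Restore the ordered list of weight arrays from the stored blob."""
    with np.load(io.BytesIO(blob)) as data:
        return [data[f"arr_{i}"] for i in range(len(data.files))]

# round-trip demo with dummy weights standing in for a real model's arrays
w = [np.ones((2, 3), dtype=np.float32), np.zeros(4, dtype=np.float32)]
blob = weights_to_bytes(w)
restored = bytes_to_weights(blob)
```

The architecture itself could be stored alongside the blob as JSON (e.g. `model.to_json()` for Keras), so the whole state round-trips through the database without touching the filesystem.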
It would be better to have the various software programs running in different containers and reuse existing Docker images for Postgres, redis, etc. The final Dockerfile would be shorter, and it would be easier to maintain, monitor and restart the components of the system.
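A minimal compose sketch of what this could look like; everything here is illustrative (service names, image tags, and volumes are placeholders, not the project's actual configuration):

```yaml
# illustrative only: split the monolithic image into reusable services
version: "3.3"
services:
  db:
    image: postgres:10        # reuse the official Postgres image
    volumes:
      - db_data:/var/lib/postgresql/data
  redis:
    image: redis:5            # reuse the official Redis image
  app:
    build: .                  # only the AIDE application itself is custom-built
    depends_on:
      - db
      - redis
volumes:
  db_data:
```

Each service can then be restarted, monitored, and upgraded independently.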
Per the kombu 5.0.0 changelog (https://docs.celeryproject.org/projects/kombu/en/stable/changelog.html#version-5-0-0),
kombu==4.6.11
should be pinned in the requirements until the old Python 2 code is dropped.
Hi all,
I'd appreciate some assistance, if possible, with configuring the Celery worker service.
I'm experiencing an issue where Celery passes the project working directory into a pidfile-checking method in multi.py, which subsequently raises an error.
The full traceback is below:
Traceback (most recent call last):
File "/home/azureuser/anaconda3/envs/aide/bin/celery", line 8, in <module>
sys.exit(main())
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/__main__.py", line 15, in main
sys.exit(_main())
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/bin/celery.py", line 213, in main
return celery(auto_envvar_prefix="CELERY")
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/bin/base.py", line 133, in caller
return f(ctx, *args, **kwargs)
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/bin/multi.py", line 480, in multi
return cmd.execute_from_commandline(args)
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/bin/multi.py", line 271, in execute_from_commandline
return self.call_command(argv[0], argv[1:])
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/bin/multi.py", line 278, in call_command
return self.commands[command](*argv) or EX_OK
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/bin/multi.py", line 148, in _inner
return fun(self, *args, **kwargs)
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/bin/multi.py", line 166, in _inner
return fun(self, cluster, sig, **kwargs)
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/bin/multi.py", line 303, in stopwait
return cluster.stopwait(sig=sig, **kwargs)
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/apps/multi.py", line 448, in stopwait
return self._stop_nodes(retry=retry, on_down=callback, sig=sig)
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/apps/multi.py", line 452, in _stop_nodes
nodes = list(self.getpids(on_down=on_down))
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/apps/multi.py", line 494, in getpids
if node.pid:
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/apps/multi.py", line 260, in pid
return Pidfile(self.pidfile).read_pid()
File "/home/azureuser/anaconda3/envs/aide/lib/python3.8/site-packages/celery/platforms.py", line 168, in read_pid
with open(self.path) as fh:
IsADirectoryError: [Errno 21] Is a directory: '/home/azureuser/aerial_wildlife_detection'
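For what it's worth, the final frame is easy to reproduce in isolation, which suggests the configured pid path is being replaced by the working directory somewhere along the way. A minimal stdlib-only sketch:

```python
import tempfile

# Celery's Pidfile.read_pid() simply open()s the configured pid path; if that
# path resolves to a directory (here, a stand-in for the project working
# directory), open() raises exactly the IsADirectoryError in the traceback.
pid_path = tempfile.mkdtemp()  # stand-in for /home/azureuser/aerial_wildlife_detection
try:
    with open(pid_path) as fh:
        fh.read()
except IsADirectoryError as err:
    print(err)
```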
Our environment file /etc/default/celeryd_aide is as follows:
CELERYD_NODES="aide@%h"
CELERY_BIN="/home/azureuser/anaconda3/envs/aide/bin/celery"
CELERY_APP="celery_worker"
CELERYD_CHDIR="/home/azureuser/aerial_wildlife_detection"
CELERYD_USER="aide_celery"
CELERYD_GROUP="aide"
CELERYD_LOG_LEVEL="INFO"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/celeryd_aide.log"
CELERYBEAT_PID_FILE="/tmp/celeryd_aide_beat.pid"
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
CELERYBEAT_LOG_FILE="/var/log/celery/celeryd_aide_beat.log"
CELERYD_OPTS=""
CELERY_CREATE_DIRS=1
CELERYBEAT_CHDIR="/home/azureuser/aerial_wildlife_detection"
CELERYBEAT_OPTS="-s /tmp"
# AIDE environment variables
AIDE_MODULES=LabelUI,AIController,AIWorker,FileServer
PYTHONPATH=/home/azureuser/aerial_wildlife_detection
The systemd service file is as follows:
[Unit]
Description=Celery Service for AIDE AIWorker
After=network.target
After=rabbitmq-server.service
After=redis.service
After=postgresql.service
[Service]
Type=forking
User=aide_celery
Group=aide
EnvironmentFile=/etc/default/celeryd_aide
WorkingDirectory=/home/azureuser/aerial_wildlife_detection
ExecStart=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi start $CELERYD_NODES --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait $CELERYD_NODES --pidfile= --loglevel="${CELERYD_LOG_LEVEL}"'
ExecReload=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi restart $CELERYD_NODES --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
Environment=AIDE_CONFIG_PATH=/home/azureuser/aerial_wildlife_detection/config/settings.ini
Environment=AIDE_MODULES=LabelUI,AIController,AIWorker,FileServer
Environment=PYTHONPATH=/home/azureuser/aerial_wildlife_detection
Restart=always
[Install]
WantedBy=multi-user.target
I've done some searching for this issue to see if it is a bug with Celery itself but haven't found anything decisive.
My main experience with Celery is through Django and I often have very simple Celery app definitions.
My OS is Ubuntu 20.04.3 and I'm using commit 087aa40.
Package versions:
absl-py==0.15.0
amqp==5.0.6
antlr4-python3-runtime==4.8
appdirs==1.4.4
bcrypt==3.2.0
billiard==3.6.4.0
black==21.4b2
bottle==0.12.19
cachetools==4.2.4
celery==5.1.2
certifi==2021.10.8
cffi==1.15.0
charset-normalizer==2.0.7
click==7.1.2
click-didyoumean==0.3.0
click-plugins==1.1.1
click-repl==0.2.0
cloudpickle==2.0.0
cryptography==35.0.0
cycler==0.11.0
Cython==0.29.24
detectron2==0.6+cu111
future==0.18.2
fvcore==0.1.5.post20211023
google-auth==2.3.3
google-auth-oauthlib==0.4.6
grpcio==1.41.1
gunicorn==20.1.0
hydra-core==1.1.1
idna==3.3
importlib-resources==5.4.0
iopath==0.1.9
kiwisolver==1.3.2
kombu==5.2.0
Markdown==3.3.4
matplotlib==3.4.3
msgpack==1.0.2
mypy-extensions==0.4.3
netifaces==0.11.0
numpy==1.21.4
oauthlib==3.1.1
omegaconf==2.1.1
opencv-python==4.5.4.58
pathspec==0.9.0
Pillow==8.4.0
portalocker==2.3.2
prompt-toolkit==3.0.22
protobuf==3.19.1
psycopg2-binary==2.9.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycocotools==2.0.2
pycparser==2.20
pydot==1.4.2
pyparsing==3.0.4
python-dateutil==2.8.2
pytz==2021.3
PyYAML==6.0
redis==3.5.3
regex==2021.11.2
requests==2.26.0
requests-oauthlib==1.3.0
rsa==4.7.2
six==1.16.0
tabulate==0.8.9
tensorboard==2.7.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
termcolor==1.1.0
toml==0.10.2
torch==1.9.0+cu111
torchvision==0.10.0+cu111
tqdm==4.62.3
typing-extensions==3.10.0.2
urllib3==1.26.7
vine==5.0.0
wcwidth==0.2.5
Werkzeug==2.0.2
yacs==0.1.8
zipp==3.6.0
Any pointers would be much appreciated.
This looks like a great tool! Well done.
We'd be interested in setting up an instance for tree detection (also an AI4Earth project). A couple of quick questions as I scan the repo:
Does the workflow track users? That can be useful for scoring annotator quality.
I like the integrated model training; it looks attractive. In general, how should I think about this tool versus Zooniverse, i.e. when to use one versus the other? They've got a nice front-end builder, but this has better backend active learning? Is that the general idea?
@agentmorris if you've got feedback too.
Hi there,
I've run the dockerised installation, and I hit this issue whenever I try to import a model.
Otherwise everything works fine. Does anyone have any idea what is going on here? Looking over the output of the Docker install (and the Makefile), it seems like it installs CUDA 11.0, but the models included with AIDE require CUDA 10.0?
The full error I am getting is:
File "/home/aide/app/modules/ModelMarketplace/backend/marketplaceWorker.py", line 354, in _import_model_state_file
modelClass = get_class_executable(modelLibrary)
File "/home/aide/app/util/helpers.py", line 100, in get_class_executable
execFile = importlib.import_module(classPath)
File "/opt/conda/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/aide/app/ai/models/detectron2/__init__.py", line 6, in <module>
from .labels.torchvisionClassifier.torchvisionClassifier import GeneralizedTorchvisionClassifier
File "/home/aide/app/ai/models/detectron2/labels/torchvisionClassifier/__init__.py", line 6, in <module>
from . import meta
File "/home/aide/app/ai/models/detectron2/labels/torchvisionClassifier/meta.py", line 7, in <module>
from detectron2.data import MetadataCatalog
File "/opt/conda/lib/python3.8/site-packages/detectron2/data/__init__.py", line 4, in <module>
from .build import (
File "/opt/conda/lib/python3.8/site-packages/detectron2/data/build.py", line 12, in <module>
from detectron2.structures import BoxMode
File "/opt/conda/lib/python3.8/site-packages/detectron2/structures/__init__.py", line 7, in <module>
from .masks import BitMasks, PolygonMasks, polygons_to_bitmask
File "/opt/conda/lib/python3.8/site-packages/detectron2/structures/masks.py", line 9, in <module>
from detectron2.layers.roi_align import ROIAlign
File "/opt/conda/lib/python3.8/site-packages/detectron2/layers/__init__.py", line 3, in <module>
from .deform_conv import DeformConv, ModulatedDeformConv
File "/opt/conda/lib/python3.8/site-packages/detectron2/layers/deform_conv.py", line 11, in <module>
from detectron2 import _C
ImportError: libc10_cuda.so: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/celery/app/trace.py", line 450, in trace_task
R = retval = fun(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/celery/app/trace.py", line 731, in __protected_call__
return self.run(*args, **kwargs)
File "/home/aide/app/modules/ModelMarketplace/backend/celery_interface.py", line 37, in import_model_uri
return worker.importModelURI(project, username, modelURI, public, anonymous, forceReimport, namePolicy, customName)
File "/home/aide/app/modules/ModelMarketplace/backend/marketplaceWorker.py", line 491, in importModelURI
return self.importModelFile(project, username, modelState, modelURI, public, anonymous, namePolicy, customName)
File "/home/aide/app/modules/ModelMarketplace/backend/marketplaceWorker.py", line 535, in importModelFile
return self._import_model_state_file(project, fileName, modelState, stateDict, public, anonymous, namePolicy, customName)
File "/home/aide/app/modules/ModelMarketplace/backend/marketplaceWorker.py", line 375, in _import_model_state_file
raise Exception(f'Model from imported state could not be launched (message: "{str(e)}").')
Exception: Model from imported state could not be launched (message: "libc10_cuda.so: cannot open shared object file: No such file or directory").
Does anyone have any ideas about how to resolve this error? I tried installing CUDA 10 directly into the Docker container, but I hit driver-version-mismatch errors.
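In case it helps others debug the same mismatch, a small diagnostic sketch: `torch.version.cuda` reports the CUDA toolkit the installed torch wheel was built against, which must be compatible with the detectron2 wheel and the host driver (the helper function is my own, not part of AIDE):

```python
def cuda_build_info():
    """Report the CUDA toolkit the installed torch wheel expects, if torch is present."""
    info = {}
    try:
        import torch
        info["torch"] = torch.__version__            # e.g. '1.9.0+cu111'
        info["built_for_cuda"] = torch.version.cuda  # build-time CUDA toolkit, e.g. '11.1'
        info["cuda_available"] = torch.cuda.is_available()
    except ImportError:
        info["torch"] = None  # torch not installed in this environment
    return info

print(cuda_build_info())
```

If `built_for_cuda` disagrees with the CUDA version detectron2 was compiled for, imports of `detectron2._C` fail with missing-shared-object errors like the one above.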
Many thanks,
Matthew
Hi,
Thank you for creating this annotation software; I think it will be really helpful for our project. I'm wondering if it is possible to view infrared (IR) and near-infrared (NIR) bands, and potentially their ratio, in the software. We have an aerial image dataset in which some objects are only visible to a human when looking at the ratio of NIR to IR. Our dataset has 5 channels: red, green, blue, IR, and NIR. Is this possible with AIDE, or is it a feature you would consider adding? I didn't see this capability in the demo, but apologies if it is already available and I missed it.
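For reference, the derived band we'd like to view is just a per-pixel ratio; a minimal numpy sketch (the channel order and the random image are assumptions for illustration):

```python
import numpy as np

# hypothetical 5-channel aerial image, channel order R, G, B, IR, NIR (H x W x 5)
img = np.random.rand(8, 8, 5).astype(np.float32)

ir, nir = img[..., 3], img[..., 4]
ratio = nir / np.clip(ir, 1e-6, None)  # NIR/IR ratio band, guarded against division by zero
# normalise to [0, 1] so the ratio could be rendered as a greyscale layer
ratio_vis = (ratio - ratio.min()) / (ratio.max() - ratio.min() + 1e-6)
```

So the feature request amounts to letting the viewer select arbitrary channels (or a computed band like this) for display.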
Thank you,
Mikey
See attached log file but failed on Ubuntu 20.04 with:
Successfully built imantics deepforest fvcore antlr4-python3-runtime
Failed to build detectron2
ERROR: Could not build wheels for detectron2, which is required to install pyproject.toml-based projects
At the moment, every task (model training, inference, etc.) is bound to the very AIController instance that submitted it. Once a new AIController gets started (e.g. through multi-threaded web servers), the exact status updates of still ongoing tasks are never received by those.
Potential solution: check if Jobtastic could solve the issue.
aerial_wildlife_detection/projectCreation/import_images.py
Lines 80 to 82 in de150d0
aerial_wildlife_detection/projectCreation/import_images.py
Lines 90 to 94 in de150d0
aerial_wildlife_detection/application.py
Lines 69 to 73 in de150d0
There are various examples of this in the source code.
Hi, I set "numImages_autotrain" to a small number (e.g. 5) to test the auto-training function. My system is Ubuntu 16.04, and all the AIDE modules run on a single machine with one AIWorker for a detection task. However, auto-training only ran once and never restarted, even after new annotations were completed; it showed the training and the task as completed.
I then manually started the training process and it worked a few times, but it would get stuck if I restarted the process (annotation and then training) again. The status would stay "PENDING" instead of "SUCCESS".
Hi again! I managed to install AIDE; it's looking great, but I'm running into an issue.
I uploaded about 200 images to test the auto training feature and started labelling with bounding boxes.
I run into the following issue:
[2022-09-21 13:38:36,671: WARNING/ForkPoolWorker-29] Assembled training images into 1 chunks (length of first: 128)
[2022-09-21 13:38:37,390: WARNING/ForkPoolWorker-28] [TP] Updating model to incorporate potentially new label classes...
[2022-09-21 13:38:37,402: WARNING/ForkPoolWorker-28] [TP] Model auto-update disabled; skipping...
[2022-09-21 13:38:38,222: WARNING/ForkPoolWorker-31] [TP] Epoch 1: Initiated training...
[2022-09-21 13:38:41,804: WARNING/ForkPoolWorker-31] WARNING: encountered unknown label classes: e75a562b-39ce-11ed-a5c4-d7f10ba73e16, e75a562a-39ce-11ed-a5c4-d7f10ba73e16, 3a0ec21b-39d0-11ed-a5c4-d7f10ba73e16, e75a562c-39ce-11ed-a5c4-d7f10ba73e16, e75a562d-39ce-11ed-a5c4-d7f10ba73e16
[2022-09-21 13:38:41,804: WARNING/ForkPoolWorker-31] need at least one array to concatenate
[2022-09-21 13:38:41,807: ERROR/ForkPoolWorker-31] Task AIWorker.call_train[bf756515-ecf8-4258-908e-981e3dd892f7] raised unexpected: Exception('[Epoch 1] error during training (reason: need at least one array to concatenate)')
Traceback (most recent call last):
File "/home/vlucet/Documents/WILDLab/repos/AIDE/aerial_wildlife_detection/modules/AIWorker/backend/worker/functional.py", line 292, in _call_train
result = modelInstance.train(stateDict=stateDict, data=data, updateStateFun=update_state)
File "/home/vlucet/Documents/WILDLab/repos/AIDE/aerial_wildlife_detection/ai/models/detectron2/genericDetectronModel.py", line 447, in train
dataLoader = build_detection_train_loader(
File "/home/vlucet/Documents/WILDLab/repos/AIDE/AIDEenv/lib/python3.8/site-packages/detectron2/config/config.py", line 210, in wrapped
return orig_func(*args, **kwargs)
File "/home/vlucet/Documents/WILDLab/repos/AIDE/AIDEenv/lib/python3.8/site-packages/detectron2/data/build.py", line 422, in build_detection_train_loader
dataset = DatasetFromList(dataset, copy=False)
File "/home/vlucet/Documents/WILDLab/repos/AIDE/AIDEenv/lib/python3.8/site-packages/detectron2/data/common.py", line 143, in __init__
self._lst = np.concatenate(self._lst)
File "<__array_function__ internals>", line 180, in concatenate
ValueError: need at least one array to concatenate
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/vlucet/Documents/WILDLab/repos/AIDE/AIDEenv/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/vlucet/Documents/WILDLab/repos/AIDE/AIDEenv/lib/python3.8/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
File "/home/vlucet/Documents/WILDLab/repos/AIDE/aerial_wildlife_detection/modules/AIWorker/backend/celery_interface.py", line 41, in call_train
return worker.call_train(data[index], epoch, numEpochs, project, is_subset, aiModelSettings)
File "/home/vlucet/Documents/WILDLab/repos/AIDE/aerial_wildlife_detection/modules/AIWorker/app.py", line 216, in call_train
return functional._call_train(project, data, epoch, numEpochs, subset, modelInstance, modelLibrary,
File "/home/vlucet/Documents/WILDLab/repos/AIDE/aerial_wildlife_detection/modules/AIWorker/backend/worker/functional.py", line 295, in _call_train
raise Exception(f'[Epoch {epoch}] error during training (reason: {str(e)})')
Exception: [Epoch 1] error during training (reason: need at least one array to concatenate)
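The ValueError in the inner traceback is what numpy raises for an empty list, which fits the "unknown label classes" warning above it: if every annotation is filtered out as unknown, the training set handed to detectron2 is empty. A minimal sketch of the failing call:

```python
import numpy as np

# detectron2's DatasetFromList calls np.concatenate on the (serialized) dataset
# list; if every image was filtered out beforehand, the list is empty and this
# is exactly the error produced.
try:
    np.concatenate([])
except ValueError as err:
    print(err)  # need at least one array to concatenate
```

So the root cause to chase is likely why the newly created label classes are not known to the model (note the "[TP] Model auto-update disabled; skipping..." line), rather than the concatenate call itself.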
Attempting installation on a new Ubuntu 18.04 image; I installed docker and docker-compose, but the build failed.
ERROR: Version in "./docker-compose.yml" is unsupported. You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the `services` key, or omit the `version` key and place your service definitions at the root of the file to use version 1.
For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/
Installation of v3 on macOS 14.3 fails:
Stored in directory: /Users/simbamangu/Library/Caches/pip/wheels/5b/eb/43/7295e71293b218ddfd627f935229bf54af9018add7fbb5aac6
Building wheel for imagecodecs (setup.py): started
Building wheel for imagecodecs (setup.py): finished with status 'error'
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [91 lines of output]
/opt/homebrew/lib/python3.8/site-packages/setuptools/__init__.py:80: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
!!
********************************************************************************
Requirements should be satisfied by a PEP 517 installer.
If you are using pip, you can try `pip install --use-pep517`.
********************************************************************************
!!
dist.fetch_build_eggs(dist.setup_requires)
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-14-arm64-cpython-38
creating build/lib.macosx-14-arm64-cpython-38/imagecodecs
copying imagecodecs/numcodecs.py -> build/lib.macosx-14-arm64-cpython-38/imagecodecs
copying imagecodecs/__init__.py -> build/lib.macosx-14-arm64-cpython-38/imagecodecs
copying imagecodecs/_imagecodecs.py -> build/lib.macosx-14-arm64-cpython-38/imagecodecs
copying imagecodecs/imagecodecs.py -> build/lib.macosx-14-arm64-cpython-38/imagecodecs
copying imagecodecs/__main__.py -> build/lib.macosx-14-arm64-cpython-38/imagecodecs
copying imagecodecs/__init__.pyi -> build/lib.macosx-14-arm64-cpython-38/imagecodecs
creating build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-zfp -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-fastlz -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-postgresql -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-libjpeg -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-lzfse -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-liblzma -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-libmng -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-lzham -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-jpg_0xc3 -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-bzip2 -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-blosc2 -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-libtiff -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-svt-av1 -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-zlib -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-qoi -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-openjpeg -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-libjpeg-turbo -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-libaivf -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-aom -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-snappy -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-lerc -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-lz4 -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-brunsli -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-dav1d -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-giflib -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-highway -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-rav1e -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-libdeflate -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-zopfli -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-cfitsio -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-zstd -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-libjxl -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-bitshuffle -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-liblj92 -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-charls -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-brotli -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-jetraw -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-libwebp -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-libaec -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/PATENTS-rav1e -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-zlib-ng -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-libpng -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-libspng -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-jpeg -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-lcms2 -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-blosc -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-lzf -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-jxrlib -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
copying imagecodecs/licenses/LICENSE-mozjpeg -> build/lib.macosx-14-arm64-cpython-38/imagecodecs/licenses
running build_ext
Compiling imagecodecs/_aec.pyx because it changed.
[1/1] Cythonizing imagecodecs/_aec.pyx
building 'imagecodecs._aec' extension
creating build/temp.macosx-14-arm64-cpython-38
creating build/temp.macosx-14-arm64-cpython-38/imagecodecs
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.sdk -I/opt/homebrew/opt/openssl/include -Iimagecodecs -I/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.8/include/python3.8 -I/opt/homebrew/lib/python3.8/site-packages/numpy/core/include -c imagecodecs/_aec.c -o build/temp.macosx-14-arm64-cpython-38/imagecodecs/_aec.o
In file included from imagecodecs/_aec.c:1215:
In file included from /opt/homebrew/lib/python3.8/site-packages/numpy/core/include/numpy/arrayobject.h:5:
In file included from /opt/homebrew/lib/python3.8/site-packages/numpy/core/include/numpy/ndarrayobject.h:12:
In file included from /opt/homebrew/lib/python3.8/site-packages/numpy/core/include/numpy/ndarraytypes.h:1940:
/opt/homebrew/lib/python3.8/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings]
#warning "Using deprecated NumPy API, disable it with " \
^
imagecodecs/_aec.c:1222:10: fatal error: 'libaec.h' file not found
#include "libaec.h"
^~~~~~~~~~
1 warning and 1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for imagecodecs
Running setup.py clean for imagecodecs
Building wheel for cffi (pyproject.toml): started
Building wheel for cffi (pyproject.toml): finished with status 'done'
Created wheel for cffi: filename=cffi-1.16.0-cp38-cp38-macosx_14_0_arm64.whl size=176978 sha256=fdfac8c99413cfeabaf4a189e0b1b23288ddc20150607ba194143b848d6cf0b6
Stored in directory: /Users/simbamangu/Library/Caches/pip/wheels/f4/df/d7/20c740c0373c550cdca4fcf0eb9af36c769ad8553ea81c6a2f
Successfully built netifaces imantics detectron2 deepforest backports.zoneinfo fvcore antlr4-python3-runtime PyYAML fire cffi
Failed to build imagecodecs
ERROR: Could not build wheels for imagecodecs, which is required to install pyproject.toml-based projects
Hi Beni,
Another one from me. If I have model predictions for an image and I click "clear all", the predictions disappear as expected. But if I then hit next and previous, the model predictions are back again. The same behaviour occurs if I label all predictions as something else: if I go next and then previous, I see the annotations, and if I move them I see the old predictions underneath.
Is this expected behaviour? In the old version, once an image was annotated, looking at it again would show only user annotations, not previous model predictions.
Cheers
The database schema parameter is required by this script, but it is missing from:
aerial_wildlife_detection/config/settings.ini
Lines 127 to 140 in de150d0
I would also suggest raising an error if a parameter is mandatory but missing. The schema should be equal to the user name.
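A sketch of the suggested fail-fast behaviour (the function name and message are hypothetical, not from the AIDE codebase):

```python
# fail fast when a mandatory setting is absent instead of proceeding silently
def get_required(config, section, key):
    try:
        return config[section][key]
    except KeyError:
        raise ValueError(f'mandatory parameter "{key}" missing from section [{section}]')

cfg = {"Database": {"user": "aide"}}  # toy stand-in for the parsed settings.ini
print(get_required(cfg, "Database", "user"))  # aide
```

Asking for a missing key (e.g. "schema") would then raise immediately with a message naming the parameter, rather than failing later with an obscure database error.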
Hello!
We're having trouble with some model prediction workflows getting stuck in the "Running Workflow" section rather than moving into the "Finished" section. We used this successfully for a few image prediction datasets a few months ago, and this seems to be a new issue as of a couple of weeks ago. Since the workflow designer page is still labelled as a work in progress, is it best to just hold off on model predictions until it's finished, or do you know if something else could be causing the issue?
Thank you!!
Hi Beni
I'm getting an error when running the setupDB script.
It seems like this change (97ee9fc) affects the new account creation here:
aerial_wildlife_detection/setup/setupDB.py
Lines 61 to 69 in c3fade9
https://github.com/microsoft/VoTT/issues
Why not look at integrating this labelling system with MS VoTT (link above)?
Hi,
Can I use AIDE to label a new dataset, for example PASCAL VOC 2012? If so, how?
Hi
I'm having issues trying to upload images into a new project. The FileServer runs on the same machine as the LabelUI. When I upload images and click "scan untracked", the images sometimes show up in the list view, but sometimes it hangs with the 'loading...' message.
Once it had loaded the images, "Add all" didn't work and I could only upload 100 images at a time. This error was shown on the console a few times, but not consistently:
Traceback (most recent call last):
File "/anaconda/envs/py37_tensorflow/lib/python3.7/site-packages/celery/app/trace.py", line 412, in trace_task
R = retval = fun(*args, **kwargs)
File "/anaconda/envs/py37_tensorflow/lib/python3.7/site-packages/celery/app/trace.py", line 704, in __protected_call__
return self.run(*args, **kwargs)
File "/home/ctorney/workspace/aerial_wildlife_detection/modules/DataAdministration/backend/celery_interface.py", line 52, in addExistingImages
return worker.addExistingImages(project, imageList)
File "/home/ctorney/workspace/aerial_wildlife_detection/modules/DataAdministration/backend/dataWorker.py", line 436, in addExistingImages
imgs_add = list(set(imgs_candidates).intersection(set(imageList)))
TypeError: unhashable type: 'dict'
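A minimal reproduction of the failing line: intersecting the candidate images with the upload list via `set()` breaks as soon as the items are dicts (e.g. rows returned with metadata) instead of plain filename strings. The row shape below is a guess for illustration:

```python
imgs_candidates = [{"filename": "img_0001.jpg"}]  # hypothetical dict rows
imageList = ["img_0001.jpg"]
try:
    imgs_add = list(set(imgs_candidates).intersection(set(imageList)))
except TypeError as err:
    print(err)  # unhashable type: 'dict'
```

So the fix would presumably be to extract the hashable filename from each candidate before building the set.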
Thanks
If using AIDE as an annotator only (i.e. no AI backend), how are the classes specified? I can see there is a labelclass table, but I can't see where the classes are defined (except in the AI models).
Hi,
After installing, when I try to run
~/aerial_wildlife_detection$ sudo docker/docker_run_cpu.sh
I get the following error messages:
: "aide_images\r" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path
: "aide_db_data\r" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path
: not founder_run_cpu.sh: 3:
docker: invalid reference format.
See 'docker run --help'.
docker/docker_run_cpu.sh: 5: --rm: not found
docker/docker_run_cpu.sh: 6: -v: not found
docker/docker_run_cpu.sh: 7: -v: not found
docker/docker_run_cpu.sh: 8: -v: not found
docker/docker_run_cpu.sh: 9: -p: not found
docker/docker_run_cpu.sh: 10: -h: not found
: not founder_run_cpu.sh: 11: aide_app
: not founder_run_cpu.sh: 12:
I'm not sure why I get the invalid-character error; most probably something went wrong during installation. I'm running on:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.2 LTS
Release: 20.04
Codename: focal
During installation I got this error message:
~/aerial_wildlife_detection$ cd docker
~/aerial_wildlife_detection/docker$ sudo docker-compose build
...
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libjemalloc1 amd64 3.6.0-11 [82.4 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 redis-tools amd64 5:4.0.9-1ubuntu0.2 [516 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 redis-server amd64 5:4.0.9-1ubuntu0.2 [35.4 kB]
debconf: unable to initialize frontend: Dialog
debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin:
Fetched 634 kB in 0s (2039 kB/s)
Selecting previously unselected package libjemalloc1.
(Reading database ... 41473 files and directories currently installed.)
Preparing to unpack .../libjemalloc1_3.6.0-11_amd64.deb ...
Unpacking libjemalloc1 (3.6.0-11) ...
Selecting previously unselected package redis-tools.
Preparing to unpack .../redis-tools_5%3a4.0.9-1ubuntu0.2_amd64.deb ...
Unpacking redis-tools (5:4.0.9-1ubuntu0.2) ...
Selecting previously unselected package redis-server.
Preparing to unpack .../redis-server_5%3a4.0.9-1ubuntu0.2_amd64.deb ...
Unpacking redis-server (5:4.0.9-1ubuntu0.2) ...
Setting up libjemalloc1 (3.6.0-11) ...
Setting up redis-tools (5:4.0.9-1ubuntu0.2) ...
Setting up redis-server (5:4.0.9-1ubuntu0.2) ...
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Created symlink /etc/systemd/system/redis.service → /lib/systemd/system/redis-server.service.
Created symlink /etc/systemd/system/multi-user.target.wants/redis-server.service → /lib/systemd/system/redis-server.service.
Processing triggers for systemd (237-3ubuntu10.47) ...
Processing triggers for libc-bin (2.27-3ubuntu1.2) ...
Removing intermediate container 32104a348723
When I try to execute with Docker Compose, I get the following:
~/aerial_wildlife_detection$ cd docker
~/aerial_wildlife_detection/docker$ sudo docker-compose up
Creating network "docker_default" with the default driver
Creating volume "aide_db_data" with default driver
Creating volume "aide_images" with default driver
Creating docker_aide_app_1 ... done
Attaching to docker_aide_app_1
aide_app_1 | /home/aide/app/docker/container_init.sh: line 2: $'\r': command not found
aide_app_1 | Failed to enable unit, unit redis-server.service\x0d.service does not exist.
aide_app_1 | Starting redis-server: redis-server.
aide_app_1 | /home/aide/app/docker/container_init.sh: line 5: $'\r': command not found
aide_app_1 | =============================
aide_app_1 | Setup of database IS STARTING
aide_app_1 | =============================
aide_app_1 | /home/aide/app/docker/container_init.sh: line 9: $'\r': command not found
aide_app_1 | Usage: /etc/init.d/postgresql {start|stop|restart|reload|force-reload|status} [version ..]
aide_app_1 | /home/aide/app/docker/container_init.sh: line 13: $'\r': command not found
aide_app_1 | psql: could not connect to server: No such file or directory
aide_app_1 | Is the server running locally and accepting
aide_app_1 | connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.17685"?
aide_app_1 | psql: could not connect to server: No such file or directory
aide_app_1 | Is the server running locally and accepting
aide_app_1 | connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.17685"?
aide_app_1 | psql: could not connect to server: No such file or directory
aide_app_1 | Is the server running locally and accepting
aide_app_1 | connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.17685"?
aide_app_1 | psql: could not connect to server: No such file or directory
aide_app_1 | Is the server running locally and accepting
aide_app_1 | connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.17685"?
aide_app_1 | psql: could not connect to server: No such file or directory
aide_app_1 | Is the server running locally and accepting
aide_app_1 | connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.17685"?
aide_app_1 | psql: could not connect to server: No such file or directory
aide_app_1 | Is the server running locally and accepting
aide_app_1 | connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.17685"?
aide_app_1 | psql: could not connect to server: No such file or directory
aide_app_1 | Is the server running locally and accepting
aide_app_1 | connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.17685"?
aide_app_1 | /home/aide/app/docker/container_init.sh: line 19: $'\r': command not found
': [Errno 2] No such file or directory'setup/setupDB.py
aide_app_1 | Failed to enable unit, unit postgresql.service\x0d.service does not exist.
aide_app_1 | Usage: /etc/init.d/postgresql {start|stop|restart|reload|force-reload|status} [version ..]
aide_app_1 | /home/aide/app/docker/container_init.sh: line 24: $'\r': command not found
aide_app_1 | ==============================
aide_app_1 | Setup of database IS COMPLETED
aide_app_1 | ==============================
aide_app_1 |
aide_app_1 | /home/aide/app/docker/container_init.sh: line 29: $'\r': command not found
aide_app_1 | ==========================
aide_app_1 | RABBITMQ SETUP IS STARTING
aide_app_1 | ==========================
aide_app_1 | Usage: /etc/init.d/rabbitmq-server {start|stop|status|rotate-logs|restart|condrestart|try-restart|reload|force-reload}
aide_app_1 | Error: unable to connect to node rabbit@aide_app_host: nodedown
aide_app_1 |
aide_app_1 | DIAGNOSTICS
aide_app_1 | ===========
aide_app_1 |
aide_app_1 | attempted to contact: [rabbit@aide_app_host]
aide_app_1 |
aide_app_1 | rabbit@aide_app_host:
aide_app_1 | * connected to epmd (port 4369) on aide_app_host
aide_app_1 | * epmd reports: node 'rabbit' not running at all
aide_app_1 | no other nodes on aide_app_host
aide_app_1 | * suggestion: start the node
aide_app_1 |
aide_app_1 | current node details:
aide_app_1 | - node name: 'rabbitmq-cli-56@aide_app_host'
aide_app_1 | - home dir: /var/lib/rabbitmq
aide_app_1 | - cookie hash: weZIL9jjvo57iPkMljrfcQ==
aide_app_1 |
aide_app_1 | Error: unable to connect to node rabbit@aide_app_host: nodedown
aide_app_1 |
aide_app_1 | DIAGNOSTICS
aide_app_1 | ===========
aide_app_1 |
aide_app_1 | attempted to contact: [rabbit@aide_app_host]
aide_app_1 |
aide_app_1 | rabbit@aide_app_host:
aide_app_1 | * connected to epmd (port 4369) on aide_app_host
aide_app_1 | * epmd reports: node 'rabbit' not running at all
aide_app_1 | no other nodes on aide_app_host
aide_app_1 | * suggestion: start the node
aide_app_1 |
aide_app_1 | current node details:
aide_app_1 | - node name: 'rabbitmq-cli-91@aide_app_host'
aide_app_1 | - home dir: /var/lib/rabbitmq
aide_app_1 | - cookie hash: weZIL9jjvo57iPkMljrfcQ==
aide_app_1 |
aide_app_1 | /home/aide/app/docker/container_init.sh: line 39: $'\r': command not found
aide_app_1 | Error: unable to connect to node rabbit@aide_app_host: nodedown
aide_app_1 |
aide_app_1 | DIAGNOSTICS
aide_app_1 | ===========
aide_app_1 |
aide_app_1 | attempted to contact: [rabbit@aide_app_host]
aide_app_1 |
aide_app_1 | rabbit@aide_app_host:
aide_app_1 | * connected to epmd (port 4369) on aide_app_host
aide_app_1 | * epmd reports: node 'rabbit' not running at all
aide_app_1 | no other nodes on aide_app_host
aide_app_1 | * suggestion: start the node
aide_app_1 |
aide_app_1 | current node details:
aide_app_1 | - node name: 'rabbitmq-cli-33@aide_app_host'
aide_app_1 | - home dir: /var/lib/rabbitmq
aide_app_1 | - cookie hash: weZIL9jjvo57iPkMljrfcQ==
aide_app_1 |
aide_app_1 | Error: unable to connect to node rabbit@aide_app_host: nodedown
aide_app_1 |
aide_app_1 | DIAGNOSTICS
aide_app_1 | ===========
aide_app_1 |
aide_app_1 | attempted to contact: [rabbit@aide_app_host]
aide_app_1 |
aide_app_1 | rabbit@aide_app_host:
aide_app_1 | * connected to epmd (port 4369) on aide_app_host
aide_app_1 | * epmd reports: node 'rabbit' not running at all
aide_app_1 | no other nodes on aide_app_host
aide_app_1 | * suggestion: start the node
aide_app_1 |
aide_app_1 | current node details:
aide_app_1 | - node name: 'rabbitmq-cli-32@aide_app_host'
aide_app_1 | - home dir: /var/lib/rabbitmq
aide_app_1 | - cookie hash: weZIL9jjvo57iPkMljrfcQ==
aide_app_1 |
aide_app_1 | /home/aide/app/docker/container_init.sh: line 42: $'\r': command not found
aide_app_1 | Error: unable to connect to node rabbit@aide_app_host: nodedown
aide_app_1 |
aide_app_1 | DIAGNOSTICS
aide_app_1 | ===========
aide_app_1 |
aide_app_1 | attempted to contact: [rabbit@aide_app_host]
aide_app_1 |
aide_app_1 | rabbit@aide_app_host:
aide_app_1 | * connected to epmd (port 4369) on aide_app_host
aide_app_1 | * epmd reports: node 'rabbit' not running at all
aide_app_1 | no other nodes on aide_app_host
aide_app_1 | * suggestion: start the node
aide_app_1 |
aide_app_1 | current node details:
aide_app_1 | - node name: 'rabbitmq-cli-66@aide_app_host'
aide_app_1 | - home dir: /var/lib/rabbitmq
aide_app_1 | - cookie hash: weZIL9jjvo57iPkMljrfcQ==
aide_app_1 |
aide_app_1 | Failed to enable unit, unit rabbitmq-server.service\x0d.service does not exist.
aide_app_1 | ===========================
aide_app_1 | RABBITMQ SETUP IS COMPLETED
aide_app_1 | ===========================
aide_app_1 |
aide_app_1 | /home/aide/app/docker/container_init.sh: line 64: syntax error: unexpected end of file
docker_aide_app_1 exited with code 2
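The repeated $'\r': command not found errors, and the \r suffix on the volume names, strongly suggest the shell scripts were checked out with Windows (CRLF) line endings. A minimal dos2unix-style sketch, assuming the affected files are the shell scripts under docker/ (adjust the path/glob to your checkout):

```python
# Sketch of a dos2unix-style fix: rewrite files in place with LF-only
# line endings so the shell no longer sees stray '\r' characters.
from pathlib import Path

def strip_crlf(path):
    """Rewrite a file in place with LF-only line endings."""
    p = Path(path)
    p.write_bytes(p.read_bytes().replace(b"\r\n", b"\n"))

# e.g., assuming the scripts live under docker/:
# for script in Path("docker").glob("*.sh"):
#     strip_crlf(script)
```

Configuring git with core.autocrlf=false (or a .gitattributes rule for *.sh) would prevent the endings from coming back on the next checkout.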
I'm just trying to figure out the best way to transfer all of my work to another machine. I see how to export the model and the annotations, but the annotations are tied to our image tiles, right? How do we get the image tiles? Any advice?
This code cannot handle an empty image table. Here is a quick fix:
imgs_existing = set([i['filename'] for i in imgs_existing or []])
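To illustrate the guard, assuming the failure mode is imgs_existing coming back as None when the image table is empty: "or []" substitutes an empty list, so the comprehension yields an empty set instead of raising.

```python
# Illustration only: guard against imgs_existing being None (assumed to be
# what an empty image table produces) before iterating over it.

def existing_filenames(imgs_existing):
    return set(i["filename"] for i in (imgs_existing or []))
```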
I just ran cd docker && docker-compose up on master.
aide_app_1 | Synchronizing state of redis-server.service with SysV service script with /lib/systemd/systemd-sysv-install.
aide_app_1 | Executing: /lib/systemd/systemd-sysv-install enable redis-server
aide_app_1 | Starting redis-server: /etc/init.d/redis-server: 51: ulimit: error setting limit (Operation not permitted)
aide_app_1 | redis-server.
aide_app_1 | =============================
aide_app_1 | Setup of database IS STARTING
aide_app_1 | =============================
aide_app_1 | * Restarting PostgreSQL 10 database server
aide_app_1 | ...done.
aide_app_1 | CREATE ROLE
aide_app_1 | CREATE DATABASE
aide_app_1 | GRANT
aide_app_1 | CREATE EXTENSION
aide_app_1 | GRANT
aide_app_1 | Synchronizing state of postgresql.service with SysV service script with /lib/systemd/systemd-sysv-install.
aide_app_1 | Executing: /lib/systemd/systemd-sysv-install enable postgresql
aide_app_1 | * Starting PostgreSQL 10 database server
aide_app_1 | ...done.
aide_app_1 | ==============================
aide_app_1 | Setup of database IS COMPLETED
aide_app_1 | ==============================
aide_app_1 |
aide_app_1 | ==========================
aide_app_1 | RABBITMQ SETUP IS STARTING
aide_app_1 | ==========================
aide_app_1 | * Starting RabbitMQ Messaging Server rabbitmq-server
aide_app_1 | ...done.
aide_app_1 | Creating user "aide"
aide_app_1 | Creating vhost "aide_vhost"
aide_app_1 | Setting permissions for user "aide" in vhost "aide_vhost"
aide_app_1 | Synchronizing state of rabbitmq-server.service with SysV service script with /lib/systemd/systemd-sysv-install.
aide_app_1 | Executing: /lib/systemd/systemd-sysv-install enable rabbitmq-server
aide_app_1 | ===========================
aide_app_1 | RABBITMQ SETUP IS COMPLETED
aide_app_1 | ===========================
aide_app_1 |
aide_app_1 | sysctl: setting key "net.ipv4.tcp_keepalive_time": Read-only file system
aide_app_1 | sysctl: setting key "net.ipv4.tcp_keepalive_intvl": Read-only file system
aide_app_1 | sysctl: setting key "net.ipv4.tcp_keepalive_probes": Read-only file system
aide_app_1 |
aide_app_1 | -------------- aide@aide_app_host v5.1.0 (sun-harmonics)
aide_app_1 | --- ***** -----
aide_app_1 | -- ******* ---- Linux-4.15.0-143-generic-x86_64-with-glibc2.10 2021-05-27 11:53:51
aide_app_1 | - *** --- * ---
aide_app_1 | - ** ---------- [config]
aide_app_1 | - ** ---------- .> app: AIDE:0x7fdb3c06abe0
aide_app_1 | - ** ---------- .> transport: amqp://aide:**@localhost:5672/aide_vhost
aide_app_1 | - ** ---------- .> results: redis://localhost:6379/0
aide_app_1 | - *** --- * --- .> concurrency: 4 (prefork)
aide_app_1 | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
aide_app_1 | --- ***** -----
aide_app_1 | -------------- [queues]
aide_app_1 | .> AIController exchange=celery(direct) key=celery
aide_app_1 | .> AIWorker exchange=celery(direct) key=celery
aide_app_1 | .> FileServer exchange=celery(direct) key=celery
aide_app_1 | .> ModelMarketplace exchange=celery(direct) key=celery
aide_app_1 | .> aide@aide_app_host exchange=celery(direct) key=celery
aide_app_1 | .> bcast.a900bb2d-be20-474a-b1a2-b345bed7ece5 exchange=aide_broadcast(fanout) key=celery
aide_app_1 |
aide_app_1 | AIDE.sh: line 10: 1647 Illegal instruction (core dumped) python setup/assemble_server.py --migrate_db 1
aide_app_1 | Pre-flight checks failed; aborting launch of AIDE.
docker_aide_app_1 exited with code 0
The software seems to crash here: aerial_wildlife_detection/util/helpers.py, line 100 in 08954b5, where path is set to ai.models.detectron2.AlexNet.
Is it possible to stop training without using the UI? The workflows manager shows no training in progress, yet the terminal output from the project shows the model is still training. I would like to force AIDE to stop training the model, yet when I kill or stop the image (either via Docker or inside the running image, using e.g. ./AIDE.sh stop), the training process keeps restarting.
Do you know how this can be done?
Cheers,
Matthew
Hi, interesting work. I followed the instructions to try AIDE on my Ubuntu 16.04 conda environment. I ran all the AIDE services on a single machine. Everything was good, but it failed to load images in the labelUI. I found that the images had already been added to the database (by running "python projectCreation/import_YOLO_dataset.py" or "python projectCreation/import_images.py"). Any idea about the problem? Thanks very much.
Hi Beni
I'm having some issues running the upgrade script.
Firstly, the config call here:
It does run but gives errors in the database updates:
insert or update on table "image_user" violates foreign key constraint "image_user_image_fkey"
DETAIL: Key (username)=(colin) is not present in table "user".
insert or update on table "annotation" violates foreign key constraint "annotation_username_fkey"
DETAIL: Key (username)=(colin) is not present in table "user".
null value in column "name" violates not-null constraint
DETAIL: Failing row contains (cameratraps, null, null, f, E2VweuhpF-RR_paP8M_1TOyc8TP335xRvqZYIFSU91c, t, null, null, null, {"enableEmptyClass": "no", "showPredictions": "yes", "showPredic..., null, null, null, null, f, null, null, null, null, null, t, admin, f, f).
insert or update on table "authentication" violates foreign key constraint "authentication_username_fkey"
DETAIL: Key (username)=(hugheyl) is not present in table "user".
CONTEXT: SQL statement "INSERT INTO aide_admin.authentication (username, project, isAdmin)
SELECT name, 'cameratraps' AS project, isAdmin FROM cameratraps.user"
PL/pgSQL function inline_code_block line 9 at SQL statement
then finally
insert or update on table "authentication" violates foreign key constraint "authentication_project_fkey"
DETAIL: Key (project)=(cameratraps) is not present in table "project".
CONTEXT: SQL statement "INSERT INTO aide_admin.authentication (username, project, isAdmin)
SELECT name, 'cameratraps', isAdmin FROM cameratraps.user
WHERE name IN (SELECT name FROM aide_admin.user)
ON CONFLICT (username, project) DO NOTHING"
PL/pgSQL function inline_code_block line 9 at SQL statement
Project "None" has been converted to AIDE v2 standards. Please do not use a v1 installation on this project anymore.
Thanks
Hi, really interested in this project. One feature request is addition of a docker compose so we can get this up and running very quickly. Happy to assist if of interest.
Cheers
Robin
When I try to upload a JPG and a TXT (YOLO format) together, the JPG seems to load but with no view available, and the loading bar stops halfway. Uploaded separately, the JPG works; and when I try to overwrite it, the JPG loads successfully, but the message for the .txt says the file already exists and its name has been changed to the next number. Can you help?
Hi Beni,
Should this line include window.baseURL?
I get an error when the baseURL is included, but it seems to work without it.
Cheers
This is what I see running aide with Docker:
aide_app_1 | sysctl: setting key "net.ipv4.tcp_keepalive_time": Read-only file system
aide_app_1 | sysctl: setting key "net.ipv4.tcp_keepalive_intvl": Read-only file system
aide_app_1 | sysctl: setting key "net.ipv4.tcp_keepalive_probes": Read-only file system
Hi Ben,
I have about 6.5k labelled patches, of which only about 500 have annotations; the rest are empty. When these annotations are exported using the utility script export_YOLO_dataset.py, only the images that have annotations produce annotation files (so 500 txt files are created). What would be really helpful is if the project also exported the other 6k null annotations as empty txt files.
Unfortunately, since I have 18k image patches in my project on AIDE, I can not just use the difference between the img file names and the txt file names to solve this problem (as I don't know which images have been viewed and marked with no annotations and which ones I have not seen in AIDE).
I know AIDE keeps track of this somewhere (since it only shows new images for annotation when Image Order is 'automatic'). I have tried accessing the Postgres databases, but the ability to connect to the non-empty database that (I assume) has tables in it seems to be disabled, and I don't want to risk accidentally dropping it.
Do you know if this is resolvable? Even a high level pointer would be really helpful!
Many thanks,
Matthew
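One possible workaround, sketched under assumptions: if the filenames of images that were actually viewed can be obtained (e.g. via a database query or an admin export), the missing empty label files can be created outside AIDE. The helper names below are hypothetical:

```python
# Hypothetical helper: given the stems of images that were viewed and the
# stems that already have YOLO .txt annotation files, create empty .txt
# files for the viewed-but-empty images (an empty YOLO file = no objects).
from pathlib import Path

def missing_empty_labels(viewed, annotated):
    """Image stems that were viewed but have no annotation file."""
    return sorted(set(viewed) - set(annotated))

def write_empty_labels(stems, out_dir):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for stem in stems:
        (out / f"{stem}.txt").touch()  # zero-byte label file
```

How to obtain the "viewed" list from AIDE's database is the open question in this issue; the sketch only covers the file-writing half.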
Hi
I've been having an issue when trying to download data. After using the workflow designer to create inferences from the model and requesting the data on the "Data Download" page, a date comes up under the "Downloads" section but the corresponding file link doesn't show up with it. We also had some problems with getting the Access Control page to load, it seems to get stuck on the loading screen.
Thank you!
Hi! I am trying to set up AIDE on my local machine, but I am having issues with setting up the database. Everything seems to work fine until I have to run python setup/setupDB.py. I seem to have the .ini file set up properly, and the Postgres database seems to have been set up correctly as well. But I am quite new to databases, so my ability to debug on my own is a little limited.
The error I get when I run python setup/setupDB.py is the following:
Traceback (most recent call last):
File "setup/setupDB.py", line 125, in <module>
setupDB()
File "setup/setupDB.py", line 97, in setupDB
dbConn.execute(sql, None, None)
File "/home/vlucet/Documents/WILDLab/repos/AIDE/aerial_wildlife_detection/modules/Database/app.py", line 99, in execute
with self._get_connection() as conn:
File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/vlucet/Documents/WILDLab/repos/AIDE/aerial_wildlife_detection/modules/Database/app.py", line 89, in _get_connection
conn = self.connectionPool.getconn()
File "/home/vlucet/Documents/WILDLab/repos/AIDE/AIDEenv/lib/python3.8/site-packages/psycopg2/pool.py", line 169, in getconn
return self._getconn(key)
File "/home/vlucet/Documents/WILDLab/repos/AIDE/AIDEenv/lib/python3.8/site-packages/psycopg2/pool.py", line 93, in _getconn
return self._connect(key)
File "/home/vlucet/Documents/WILDLab/repos/AIDE/AIDEenv/lib/python3.8/site-packages/psycopg2/pool.py", line 63, in _connect
conn = psycopg2.connect(*self._args, **self._kwargs)
File "/home/vlucet/Documents/WILDLab/repos/AIDE/AIDEenv/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL: password authentication failed for user "aide_pop_os_34JT"
connection to server at "localhost" (::1), port 5432 failed: FATAL: password authentication failed for user "aide_pop_os_34JT"
Hi, I tried the repo and found that it failed to start auto-training. The training status stayed true and the task status PENDING after one (manually started) training run completed. I followed the instructions and ran all the services on a single machine with one AIWorker. Is there something I missed?
This is really interesting work, integrating annotation and training. I have not yet found a way to export the annotations in the code. It would be useful to add that function so the annotations could be reused in other systems.
Thanks
Hi Beni
I've been working on annotations of wildebeest with Colin T and have been providing a tour of the system to our colleagues at the Giraffe Conservation Foundation. I've noticed that I seem to get "timed out" when annotations take a long time (~10 minutes) to complete/finalize an image. Annotations seem to be saved without issue when an image can be completed in 5 minutes or less.
Details: Upon clicking 'Next' after finishing an image annotation, I receive an 'Unexpected Error' window. I can resolve the issue by clicking 'Ok' to the error message and then continue to once again select the 'Next' button. I am then prompted with re-entering my login details. The system then seems to resolve my annotations without issue. Flagging this issue for you, but I think it's related to being timed out.
Jared
Hi,
I'm wondering if it's possible to set up the interface such that the file name can be displayed somewhere on the screen along with the image. I ask because the object in my images is very hard to see, and I want the technicians to be able to go look at some other files associated with the image file if necessary.
Thank you,
Mikey
This software is great and I'm excited to get using it! I have a few questions about extracting data; I want to ensure that I get this sorted out before having the data labeled.
aide_query_[date/time] (pasted below). Contents of the aide_query file:
id; image; meta; label; username; autoConverted; timeCreated; timeRequired; unsure; username; viewcount; last_checked; last_time_required
c604558b-ab27-47e9-a532-56726a0a821c; ed79e933-a83a-4c40-9269-9f75c6cf0b03; {"browser": {"vendorSub": "", "productSub": "20030107", "vendor": "Google Inc.", "maxTouchPoints": 0, "hardwareConcurrency": 16, "cookieEnabled": true, "appCodeName": "Mozilla", "appName": "Netscape", "appVersion": "5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36", "platform": "Win32", "product": "Gecko", "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36", "language": "en-US", "languages": ["en-US", "en"], "onLine": true, "doNotTrack": null, "geolocation": {}, "mediaCapabilities": {}, "connection": {}, "plugins": {"0": {"0": {}}, "1": {"0": {}}, "2": {"0": {}, "1": {}}}, "mimeTypes": {"0": {}, "1": {}, "2": {}, "3": {}}, "webkitTemporaryStorage": {}, "webkitPersistentStorage": {}, "userActivation": {}, "mediaSession": {}, "permissions": {}}, "windowSize": [2560, 1281], "uiControls": {"burstModeEnabled": false}}; 1ab5e9a8-c08c-11ea-bad2-0242ac160002; admin; False; 2020-07-08 16:57:31.985000+01:00; 493071; True; admin; 6; 2020-07-08 16:57:31.985000+01:00; 19283;
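For anyone post-processing that export: it appears to be semicolon-delimited, and the header repeats the username column, so a dict-based reader would silently drop one of the two. A hedged sketch that keeps columns positional:

```python
# Hedged sketch for parsing the export shown above. Assumes ';' is the
# field delimiter; rows are returned as positional lists rather than
# dicts because the header contains "username" twice.
import csv

def read_aide_export(path):
    with open(path, newline="") as f:
        reader = csv.reader(f, delimiter=";")
        header = [h.strip() for h in next(reader)]
        rows = [[v.strip() for v in row] for row in reader]
    return header, rows
```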
The 'priority' value that determines the importance of an image to be shown to users is currently attached to predictions instead of images. This means that images without any predictions are not prioritized in any way.
Solution: augment or move 'priority' to the image level.
A very interesting work! However, when I try to launch AIDE with the command './AIDE.sh start', an error occurs, as the picture shows. Here is the screenshot:
I also attach my setting file
settings.txt
Does anyone have suggestions?
Hi Ben - I met you a few weeks ago when you demoed AIDE for our AI for Conservation Slack group. I have been trying to use AIDE on an Azure VM (one of the Ubuntu Data Science Virtual Machines). I got it installed through Docker following your instructions, but it doesn't seem to be launching from Docker, in that the UI (in gunicorn?) is just not opening up. Once it's open, I also can't seem to kill it in the terminal without shutting down the VM. For complicated reasons, I am running the Ubuntu GUI through VNC Viewer and Bitvise on a different Windows VM, so maybe that's the trouble? But other programs in there (Firefox) seem to show up correctly, so I'm a little stumped. Also, neither ^C in this terminal nor ./AIDE.sh stop from a different terminal seems to kill it.
I did not set those environment variables prior to launching in Docker, so maybe that's the problem? If I manually type http://0.0.0.0:8080 into Firefox, I do get the AIDE Label Interface sign-in screen... but I also don't know how to sign in (I tried what I entered into the settings.ini file, but that didn't work)?
Here is the terminal output:
sysctl: setting key "net.ipv4.tcp_keepalive_time": Read-only file system
sysctl: setting key "net.ipv4.tcp_keepalive_intvl": Read-only file system
sysctl: setting key "net.ipv4.tcp_keepalive_probes": Read-only file system
[2021-02-05 22:25:30 +0000] [1703] [INFO] Starting gunicorn 20.0.4
[2021-02-05 22:25:30 +0000] [1703] [INFO] Listening at: http://0.0.0.0:8080 (1703)
[2021-02-05 22:25:30 +0000] [1703] [INFO] Using worker: sync
[2021-02-05 22:25:30 +0000] [1709] [INFO] Booting worker with pid: 1709
[2021-02-05 22:25:30 +0000] [1710] [INFO] Booting worker with pid: 1710
^C
Has anyone successfully built this from scratch recently? I'm getting lots of issues with Python modules not being found. It looks like pull request #25 tries to fix one of these (No module named 'kombu.five') by pinning celery to 4.4.7 in docker/requirements.txt, but that didn't work for me. Adding kombu==4.6.11 to docker/requirements.txt fixed the kombu.five issue, but then I get the same thing with vine.five. Adding vine==1.3.0 solves that, but then I get an issue with celery.task. Can anyone with a working image supply a list of the correct module versions that I can add to the requirements.txt file? I'm not particularly experienced with Docker and Python, so apologies if this is something simple that I'm stuffing up.
smacdonald@ordovicia:~/docker/cameratrapping/aerial_wildlife_detection/docker$ sudo docker-compose up
Recreating docker_aide_app_1 ... done
Attaching to docker_aide_app_1
aide_app_1 | Synchronizing state of redis-server.service with SysV service script with /lib/systemd/systemd-sysv-install.
aide_app_1 | Executing: /lib/systemd/systemd-sysv-install enable redis-server
aide_app_1 | Starting redis-server: redis-server.
aide_app_1 | =============================
aide_app_1 | Setup of database IS STARTING
aide_app_1 | =============================
aide_app_1 | * Restarting PostgreSQL 10 database server
aide_app_1 | ...done.
aide_app_1 | GRANT
aide_app_1 | NOTICE: extension "uuid-ossp" already exists, skipping
aide_app_1 | CREATE EXTENSION
aide_app_1 | GRANT
aide_app_1 | Traceback (most recent call last):
aide_app_1 | File "setup/setupDB.py", line 14, in <module>
aide_app_1 | from modules import Database, UserHandling
aide_app_1 | File "/home/aide/app/modules/__init__.py", line 24, in <module>
aide_app_1 | from .AIController.app import AIController
aide_app_1 | File "/home/aide/app/modules/AIController/app.py", line 9, in <module>
aide_app_1 | from modules.AIController.backend.middleware import AIMiddleware
aide_app_1 | File "/home/aide/app/modules/AIController/backend/middleware.py", line 23, in <module>
aide_app_1 | from modules.AIController.taskWorkflow.workflowTracker import WorkflowTracker
aide_app_1 | File "/home/aide/app/modules/AIController/taskWorkflow/workflowTracker.py", line 15, in <module>
aide_app_1 | from celery.task.control import revoke
aide_app_1 | ModuleNotFoundError: No module named 'celery.task'
aide_app_1 | Synchronizing state of postgresql.service with SysV service script with /lib/systemd/systemd-sysv-install.
aide_app_1 | Executing: /lib/systemd/systemd-sysv-install enable postgresql
aide_app_1 | * Starting PostgreSQL 10 database server
aide_app_1 | ...done.
aide_app_1 | ==============================
aide_app_1 | Setup of database IS COMPLETED
aide_app_1 | ==============================
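Collecting the pins mentioned in this thread into one place (these come from the comments above and are not verified against a working image):

```
celery==4.4.7
kombu==4.6.11
vine==1.3.0
```

The celery.task import removed in Celery 5 is the remaining failure in the log above, which is why pinning celery itself to the 4.x series is part of the reported workaround.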
Hi Beni
I'm just looking at the new code, and I get an error after running the migrate-AIDE script and opening the Model Marketplace tab. The script doesn't seem to include the annotation/prediction types in the model marketplace table.
It seems that maxnumimages_inference is always zero, as it's missing from the parameter list here:
aerial_wildlife_detection/modules/ProjectAdministration/backend/middleware.py
Lines 232 to 235 in 2990463