Comments (56)
You could try my smartsearch, though it's a paid app.
https://play.google.com/store/apps/details?id=me.zhangjh.smart.search
That PicQuery above is really going all out. I don't know what they gain from putting so much effort into a free app. Don't they value their own work? 😂
from queryable.
I will give it a try; right now I'm working on some model-related things.
from queryable.
I split the model into ImageEncoder and TextEncoder, like the author did, and successfully ran the demo on Android using PyTorch Mobile.
Running results on a Xiaomi 12S @ Snapdragon 8+ Gen 1 SoC:
- Inference speed: around 10 seconds for 100 images and 250 seconds for 2000 images (much slower than on iOS). All photos are 400*400px JPEGs.
- The exported ImageEncoder and TextEncoder are also very large (335 MB and 242 MB, respectively).
What I have tried so far:
- Quantizing the model, which failed because I am not familiar with PyTorch's quantization APIs.
- Converting the model to run via NNAPI, but no luck.
I observed the device in profile mode and found that CPU usage was only about 45%; I am not sure whether the NPU was used during inference.
I'm still evaluating whether it's worth continuing; any advice would be appreciated 😄
Refs:
- (beta) Efficient mobile interpreter in Android and iOS — PyTorch Tutorials 2.0.1+cu117 documentation
- Pytorch Mobile Performance Recipes — PyTorch Tutorials 2.0.1+cu117 documentation
- https://pytorch.org/tutorials/prototype/nnapi_mobilenetv2.html
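For readers new to this setup, the pipeline these two encoders enable can be sketched in plain NumPy: precompute an embedding per photo once, then answer a text query by cosine similarity. This is purely illustrative; the random vectors below are stand-ins for real CLIP outputs, and 512 is the ViT-B/32 embedding width.

```python
import numpy as np

def normalize(x):
    # L2-normalize along the last axis so a dot product equals cosine similarity
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
# Stand-ins for ImageEncoder outputs: 2000 photos -> 512-d embeddings
image_embeddings = normalize(rng.standard_normal((2000, 512)).astype(np.float32))
# Stand-in for the TextEncoder output for one query
text_embedding = normalize(rng.standard_normal(512).astype(np.float32))

# Query: cosine similarity against all photos, take the top-5 indices
scores = image_embeddings @ text_embedding
top5 = np.argsort(scores)[::-1][:5]
print(top5, scores[top5])
```

The expensive part is the one-time image encoding (the 250 s for 2000 images reported above); the query itself is a single matrix-vector product.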
from queryable.
Great news! The Android app (PicQuery) is now on Google Play for free, supporting both English and Chinese: https://play.google.com/store/apps/details?id=me.grey.picquery
The source code will be public soon, as I need to clean a few things up :)
@Young-Flash @williamlee1982 @mazzzystar @stakancheck
from queryable.
PicQuery is open-source now, see https://github.com/greyovo/PicQuery :)
from queryable.
@greyovo just go ahead, I am willing to help if needed.
from queryable.
Maybe you can briefly introduce the required technology stack and technical route, so that others can participate 😄
@Young-Flash In fact, most of the work relies on the Colab script provided by the author @mazzzystar; I just exported the two encoders to PyTorch JIT models instead of Core ML.
Possible improvements in my opinion:
- Converting the model for NNAPI execution.
- Pruning or quantizing the model, which requires a deep understanding of the model structure and may require retraining.
- Distilling knowledge from the model, which requires familiarity with deep learning techniques and also needs retraining.
- Looking for other multimodal models similar to CLIP, but I searched around and couldn't find anything more efficient and smaller than ViT-B/32 :(
Perhaps the easiest way is to convert the model for NNAPI to speed up the encoders. I tried following PyTorch's official tutorial but failed; it seems to require a PC with an ARM64 processor. I'm not sure if I missed something.
from queryable.
I have made some progress on quantization with ONNX. Please check my repo CLIP-android-demo for details :)
@Young-Flash @mazzzystar
from queryable.
I verified the ONNX quantized model; the code is here, and the results on my local machine are as follows:
model | result |
---|---|
CLIP | [[6.1091479e-02 9.3267566e-01 5.3717378e-03 8.6108845e-04]] |
clip-image-encoder.onnx & clip-text-encoder.onnx | [[6.1091259e-02 9.3267584e-01 5.3716768e-03 8.6109847e-04]] |
clip-image-encoder-quant-int8.onnx & clip-text-encoder-quant-int8.onnx | [[4.703762e-02 9.391219e-01 9.90335e-03 3.93698e-03]] |
I think it is good to go.
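The "good to go" call can be backed up numerically from the table above: the INT8 model keeps the same top-1 prediction as FP32, and the probability drift is small. A quick check using the reported rows:

```python
import numpy as np

# Softmax outputs copied from the comparison table above
fp32 = np.array([6.1091259e-02, 9.3267584e-01, 5.3716768e-03, 8.6109847e-04])
int8 = np.array([4.703762e-02, 9.391219e-01, 9.90335e-03, 3.93698e-03])

# Same top-1 class, and the per-class drift stays small
same_top1 = int(np.argmax(fp32)) == int(np.argmax(int8))
max_abs_diff = float(np.max(np.abs(fp32 - int8)))
print(same_top1, round(max_abs_diff, 4))  # True 0.0141
```

A max absolute drift of about 1.4 percentage points with an unchanged ranking is typically acceptable for retrieval-style use.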
from queryable.
yeah I have made it here.
from queryable.
Got here from my previous GitHub issue: microsoft/onnxruntime#16472, which linked this repo.
Hi, have you solved that issue? And do you use OpenAI CLIP or Chinese-CLIP?
Yeah, I've been using Chinese-CLIP.
I've forgotten how I solved the issue since it was a long time ago. Maybe I overlooked it because I found that it worked well in the Android environment.
from queryable.
Hi, I'm not good at Android dev, and my energy is also devoted to other projects, so I'd be glad if someone could help : )
from queryable.
@greyovo I'm not sure distillation is a good idea compared to quantization. Even if you don't quantize the model, using FP16 would significantly increase inference speed.
I might try to export a lightweight Android version if I have time in the future : ) Is ONNX okay?
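On the size side alone, the FP16 suggestion is easy to see: halving the width of every weight halves the file, so the ~335 MB FP32 image encoder would shrink to roughly half before any quantization. A toy NumPy illustration (random weights standing in for the real model; sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# A stand-in weight matrix; a real encoder is just many such tensors
w32 = rng.standard_normal((1024, 1024)).astype(np.float32)
w16 = w32.astype(np.float16)

half_size = w16.nbytes * 2 == w32.nbytes          # exactly half the bytes
max_err = float(np.max(np.abs(w32 - w16.astype(np.float32))))  # tiny rounding error
print(half_size, max_err)
```

Speed gains on-device depend on whether the runtime actually executes FP16 kernels, but the storage saving is unconditional.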
from queryable.
By the way, how about using NCNN to compile and deploy on Android?
Deploying with NCNN requires solid C++ and JNI development skills, which I'm not familiar with... Sorry.
from queryable.
Haha, it doesn't matter at all, just a suggestion. You have done a great job!
from queryable.
@greyovo Thanks for your great work, would love to see an Android app : )
from queryable.
CLIP doesn't support Chinese well, see here. I tested the same image with Chinese input (["老虎", "猫", "狗", "熊"]) and English input (["a tiger", "a cat", "a dog", "a bear"]); the logits were [[0.09097634 0.18403262 0.24364232 0.4813488 ]] and [[0.04703762 0.9391219 0.00990335 0.00393698]] respectively. The Chinese result isn't ideal.
@mazzzystar How do you deal with Chinese text input in Queryable?
I tried Chinese-CLIP with an ONNX quantized model today and got good results; the code is here and the results are as follows:
model | language | result |
---|---|---|
Chinese-CLIP | Chinese | [[1.9532440e-03 9.9525285e-01 2.2442457e-03 5.4962368e-04]] |
Chinese-CLIP | English | [[2.5376787e-03 9.9683857e-01 4.3544930e-04 1.8830669e-04]] |
clip-cn-image-encoder.onnx & clip-cn-text-encoder.onnx | Chinese | [[1.9535627e-03 9.9525201e-01 2.2446462e-03 5.4973643e-04]] |
clip-cn-image-encoder.onnx & clip-cn-text-encoder.onnx | English | [[2.5380836e-03 9.9683797e-01 4.3553708e-04 1.8835040e-04]] |
clip-cn-image-encoder-quant-int8.onnx & clip-cn-text-encoder-quant-int8.onnx | Chinese | [[0.00884504 0.98652565 0.00179121 0.00283814]] |
clip-cn-image-encoder-quant-int8.onnx & clip-cn-text-encoder-quant-int8.onnx | English | [[0.02240802 0.97132427 0.00435637 0.00191139]] |
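The table can be read programmatically too: for both Chinese and English prompts, the INT8 Chinese-CLIP keeps the same top-1 class ("cat", from the test prompts earlier in the thread) as the original model, which supports swapping it in:

```python
import numpy as np

# Probability rows copied from the Chinese-CLIP comparison table above
rows = {
    "original/zh": [1.9532440e-03, 9.9525285e-01, 2.2442457e-03, 5.4962368e-04],
    "original/en": [2.5376787e-03, 9.9683857e-01, 4.3544930e-04, 1.8830669e-04],
    "int8/zh":     [0.00884504, 0.98652565, 0.00179121, 0.00283814],
    "int8/en":     [0.02240802, 0.97132427, 0.00435637, 0.00191139],
}
labels = ["tiger", "cat", "dog", "bear"]  # from the test prompts above

predictions = {name: labels[int(np.argmax(p))] for name, p in rows.items()}
print(predictions)  # every row predicts "cat"
```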
from queryable.
@Young-Flash Same here. 😢 But I found that converting the onnx format to ort, and using the *.with_runtime_opt.ort version, may narrow the result gap a bit. See here and here .... Though differences are still observed, the query results are acceptable. (I am using CLIP, not Chinese-CLIP.)
I also observed that the quantized model may exhibit this problem while the original model does not.
By the way, I have finished the basic indexing and querying features, but I am still working on UI issues. I might replace the model with Chinese-CLIP in the near future.
from queryable.
I tried ort too but gave up when I found the inference results differed. After that I tried the ResNet-50 variant of Chinese-CLIP instead of the ViT one and got the same result as Python inference. Maybe the problem lies in an operator the ViT model uses?
Agreed.
Do you plan to share your code? I think Chinese-CLIP is worth a try, since it supports both Chinese and English.
I will try with Chinese-CLIP. I need to apply for a Software Copyright Certificate (aka 软件著作权) to get it onto the app market, and then I'll make it open source.
Feel free to let me know if there's anything I can help with.
Thanks in advance :) @Young-Flash
from queryable.
I've already developed an Android app named smartSearch. You guys can try it.
https://play.google.com/store/apps/details?id=me.zhangjh.smart.search
from queryable.
@greyovo Great! Will update the Android code and app details in the README after your work is complete. :)
from queryable.
@Baiyssy @nodis Thanks for the feedback! I've noted all these bugs, and most of the features mentioned will be added; it just takes some time, as I've been busy lately.
@LXY1226 Thanks for the support!
from queryable.
@greyovo Great! I've added your Android repository link in the README.
from queryable.
You rock!!! @greyovo
It's time to close this issue. New discussions can be opened in PicQuery. Thanks everyone 😄
from queryable.
I've sent you an email, please check it out if you have time. Respect!
from queryable.
Looking forward to your work; I really want this on Android.
from queryable.
@greyovo For the ViT-B-32 CLIP model, the required image size is 224x224, so maybe doing some preprocessing would make the indexing faster.
from queryable.
@mazzzystar Thanks for the advice. I actually do preprocess (i.e. resizing to 224px, center cropping, normalizing..., like CLIP's preprocess() function does) before encoding images. Since I tried with 3000px*2000px images and got the same result, I don't think that's the main problem :(
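For reference, CLIP's preprocess() amounts to: resize the short side to 224, center-crop to 224x224, scale pixels to [0,1], then normalize with CLIP's published per-channel means/stds. The crop-and-normalize part is only a few lines of NumPy (a sketch; the real pipeline also does bicubic resizing and RGB conversion):

```python
import numpy as np

# CLIP's normalization constants (RGB order)
MEAN = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
STD = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)

def center_crop_normalize(img, size=224):
    """img: HxWx3 uint8 array, already resized so min(H, W) >= size."""
    h, w, _ = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    crop = img[top:top + size, left:left + size].astype(np.float32) / 255.0
    crop = (crop - MEAN) / STD          # per-channel normalization
    return crop.transpose(2, 0, 1)      # HWC -> CHW, as the encoder expects

x = center_crop_normalize(np.random.randint(0, 256, (300, 400, 3), dtype=np.uint8))
print(x.shape)  # (3, 224, 224)
```

Since the preprocessing already downsizes everything to 224x224, the source resolution should indeed have little effect on encoder latency, consistent with the observation above.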
from queryable.
Pruning or quantizing the model, which requires a deep understanding of the model structure and may require retraining
I think you may be able to do quantization when exporting the PyTorch model. pytorch/pytorch#76726
from queryable.
I think you may be able to do quantization when exporting the PyTorch model. pytorch/pytorch#76726
@mazzzystar Yes, I also tried quantization but encountered several problems I couldn't solve, so the quantization failed, not to mention the NNAPI conversion (which needs a quantized model). I may share the Jupyter notebook I used later to see if anyone can help.
from queryable.
An interesting thing is that I found some efforts on distilling the CLIP model:
-
  - It's an Android app using the CLIP model. @x97425 @Young-Flash @williamlee1982, in case you may need it.
  - It uses a model distilled from CLIP. The speed can be doubled compared to the original ViT-B/32 (that is, ~5 seconds for 100 pictures), but unfortunately this sacrifices accuracy (it even fails the example zero-shot prediction on CIFAR-100).
-
  - They outlined the general idea in their blog, but no specific code for distillation was provided.
At least they proved that knowledge distillation may be a feasible direction, though it requires notable effort.
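For context, the usual embedding-distillation objective is simple: push the student's embedding toward the teacher's, e.g. by minimizing 1 - cosine similarity. A sketch with random stand-in embeddings (real distillation would backpropagate this loss through a smaller student network):

```python
import numpy as np

def cosine_distill_loss(teacher, student):
    # 1 - cosine similarity, averaged over the batch
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(t * s, axis=1)))

rng = np.random.default_rng(0)
teacher = rng.standard_normal((8, 512))
student_bad = rng.standard_normal((8, 512))                     # unrelated student
student_good = teacher + 0.01 * rng.standard_normal((8, 512))   # near-copy of teacher

loss_bad = cosine_distill_loss(teacher, student_bad)    # near 1 (random directions)
loss_good = cosine_distill_loss(teacher, student_good)  # near 0 (aligned)
print(loss_bad, loss_good)
```

The appeal is that the student only has to match embeddings, not labels, so any unlabeled photo collection can serve as distillation data; the cost, as noted above, is the retraining effort and possible accuracy loss.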
from queryable.
I might try to export a lightweight Android version if I have time in the future : ) Is ONNX okay?
@mazzzystar You are right! There's another option: ONNX. It seems to have complete docs and demos, so yes, it's worth a try! Thanks :)
from queryable.
@mazzzystar Hi, I didn't quite understand where things stand with the Android app. I would take up this project in my spare time and rewrite part of the logic in Kotlin (KMP), but I would need help from an AI specialist. Are there any developments in this direction?
from queryable.
@stakancheck The original ViT-B/32 model was too large for Android devices (see the discussion above), and hence encoding images into embeddings was much slower than on iOS. So we are working on the model to see whether I or @mazzzystar can export a lightweight version to speed up execution and reduce the model size.
By the way, are you familiar with Kotlin or Jetpack Compose? I am a beginner in Android development (I used Flutter before), but I would love to help build the app :)
from queryable.
I just exported the two encoders to Pytorch-JIT models respectively instead of CoreML.
@greyovo Could you please share your code (including how to run it on your Xiaomi 12S)?
By the way, how about using NCNN to compile and deploy on Android?
- reference: pnnx
from queryable.
@Young-Flash
Queryable does not support Chinese. I trained a Chinese text encoder myself; the method is similar to Chinese-CLIP, and you can refer to this article. However, the training data is not MIT open-sourced, so I cannot provide the Chinese version of the model weights, but you can convert the open-source Chinese text encoder mentioned above as needed.
from queryable.
I see. I found a demo which uses Chinese to query; I thought it was translating Chinese into English, but I didn't find the relevant code here, so I was puzzled.
Chinese-CLIP is a pre-trained model with an MIT license. The above clip-cn-image-encoder-quant-int8.onnx and clip-cn-text-encoder-quant-int8.onnx take 84.93 MB and 97.89 MB, while @greyovo's clip-image-encoder-quant-int8.onnx and clip-text-encoder-quant-int8.onnx take 91.2 MB and 61.3 MB. I think Chinese-CLIP after quantization is acceptable, so maybe we could use it to replace CLIP. What do you think?
from queryable.
@Young-Flash That's exactly what I mean. Note that Chinese-CLIP's text-encoder architecture (BERT) is a little different from ViT-B-32's, so you may need to adjust the Jupyter notebook accordingly.
from queryable.
Guys, any update on the Android version? I really want it.
from queryable.
I am blocked by a weird onnxruntime issue: the text encoder running on Android gives the same inference result as Python, while the ViT image encoder doesn't.
from queryable.
I've already developed an Android app named smartSearch. You guys can try it: https://play.google.com/store/apps/details?id=me.zhangjh.smart.search.en
I tried it and it didn't work; it crashed a few seconds after it started building the index.
from queryable.
Could you share some device info? Which brand, which OS version?
Most likely the phone ran out of memory, causing an OOM crash.
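A common mitigation for this kind of indexing OOM is to process photos in small fixed-size batches and release each batch's bitmaps before loading the next, rather than holding everything in memory. The idea, sketched in Python with a hypothetical encode_batch callback (the callback name and batch size are illustrative assumptions, not the app's actual API):

```python
def index_in_batches(photo_ids, encode_batch, batch_size=64):
    """Encode photos in fixed-size batches so peak memory stays bounded.

    encode_batch is a hypothetical callback that loads, preprocesses, and
    encodes one batch of photos, returning a list of embeddings; its inputs
    can be freed as soon as the batch is done.
    """
    embeddings = []
    for i in range(0, len(photo_ids), batch_size):
        batch = photo_ids[i:i + batch_size]
        embeddings.extend(encode_batch(batch))
    return embeddings

# Toy usage: "encoding" is just the identity here
result = index_in_batches(list(range(10)), encode_batch=lambda b: b, batch_size=3)
print(len(result))  # 10
```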
from queryable.
OnePlus 11, ColorOS 13, with 16 GB of memory; that should be enough to run it.
from queryable.
@greyovo Nice UI, thanks for your great work.
But it doesn't seem to work on my device (Honor 9X Pro): it indexes pics, but I can't get a single pic back after querying.
from queryable.
Great news! The Android app (PicQuery) is now on Google Play for free, supporting both English and Chinese: https://play.google.com/store/apps/details?id=me.grey.picquery
The source code will be public soon, as I need to clean a few things up :)
Hello, when indexing an album, the app crashes every time around 800 or 900 images are scanned.
Environment:
Mobile phone: Xiaomi 13ultra
Android version: Android 13
MIUI version: MIUI 14 V14.0.23.9.18.DEV development version
java.io.FileNotFoundException: Failed to create image decoder with message 'invalid input'Input contained an error.
at android.database.DatabaseUtils.readExceptionWithFileNotFoundExceptionFromParcel(DatabaseUtils.java:151)
at android.content.ContentProviderProxy.openTypedAssetFile(ContentProviderNative.java:780)
at android.content.ContentResolver.openTypedAssetFileDescriptor(ContentResolver.java:2029)
at android.content.ContentResolver.openTypedAssetFile(ContentResolver.java:1934)
at android.content.ContentResolver.lambda$loadThumbnail$0(ContentResolver.java:4159)
at android.content.ContentResolver$$ExternalSyntheticLambda1.call(Unknown Source:10)
at android.graphics.ImageDecoder$CallableSource.createImageDecoder(ImageDecoder.java:550)
at android.graphics.ImageDecoder.decodeBitmapImpl(ImageDecoder.java:1870)
at android.graphics.ImageDecoder.decodeBitmap(ImageDecoder.java:1863)
at android.content.ContentResolver.loadThumbnail(ContentResolver.java:4158)
at android.content.ContentResolver.loadThumbnail(ContentResolver.java:4142)
at me.grey.picquery.domain.ImageSearcher.encodePhotoList(ImageSearcher.kt:116)
at me.grey.picquery.domain.ImageSearcher$encodePhotoList$1.invokeSuspend(Unknown Source:14)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:280)
at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59)
at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source:1)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38)
at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source:1)
at me.grey.picquery.domain.AlbumManager$encodeAlbums$2.invokeSuspend(AlbumManager.kt:103)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:584)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:793)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:697)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:684)
Suppressed: kotlinx.coroutines.internal.DiagnosticCoroutineContextException: [androidx.compose.ui.platform.MotionDurationScaleImpl@e39da66, androidx.compose.runtime.BroadcastFrameClock@1141a7, StandaloneCoroutine{Cancelling}@7840654, AndroidUiDispatcher@61c79fd]
from queryable.
@zhangjh
Thank you for letting me know it's open-sourced. I deleted it because I felt it was unrelated to this issue.
from queryable.
Amazing! For a first version, it's already so polished!
It runs almost perfectly on my Xiaomi 9 (Android 11), having indexed about 12,000 images.
A few minor issues and suggestions:
- Indexing certain folders causes a crash.
- Occasionally, after a search, tapping the X at the right end of the search box doesn't focus it; I have to tap the box again before I can type.
- Search returns at most 30 images; could it return more results?
- It would be nice if the image viewer had a share menu; right now I can only view.
- Could search results be sorted by relevance or by time?
- Could time and location filters be added?
- Could related keywords be suggested? For example, when searching for "sunrise", if the app finds many photos of sunrise over the sea, it could suggest "sunrise at sea" as a related keyword. Though this doesn't seem to be supported in principle.
Thanks!
from queryable.
Great job overall!
But I've also run into the crash issue. Looking forward to the open-source release so I can fix it right away. Also, ONNX's NNAPI support looks like it could be used directly.
from queryable.
@Baiyssy @nodis Thanks for the feedback! I've noted all these bugs, and most of the features mentioned will be added; it just takes some time, as I've been busy lately.
@LXY1226 Thanks for the support!
It seems newly added images aren't indexed?
from queryable.
It seems newly added images aren't indexed?
@Baiyssy Yes, I forgot to consider this. The current version cannot automatically update the index, and there's no way to rebuild it either... I'll fix this in a future release.
from queryable.