
Comments (56)

zhangjh commented on July 17, 2024

You can try my SmartSearch, though it is a paid app.
https://play.google.com/store/apps/details?id=me.zhangjh.smart.search
That PicQuery above is really racing to the bottom. It's free, and I don't know what they gain by pouring so much effort into development for free. Don't they respect their own work? 😂


sunlin-xiaonai commented on July 17, 2024

I will give it a try. Right now I am working on some model-related things.


greyovo commented on July 17, 2024

I split the model into ImageEncoder and TextEncoder, like the author did, and successfully ran the demo on Android using PyTorch Mobile.

Results on a Xiaomi 12S (Snapdragon 8+ Gen 1 SoC):

  • Inference speed: around 10 seconds for 100 images and 250 seconds for 2000 images (much slower than on iOS). All photos are 400×400px JPEGs.

  • The exported ImageEncoder and TextEncoder are also very large (335 MB and 242 MB, respectively).

What I have tried so far:

  1. Quantizing the model, but it failed because I am not familiar with PyTorch's quantization operations
  2. Converting the model for NNAPI execution, but with no luck

I observed the device status in profile mode and found that the CPU usage is only about 45%. I am not sure if NPU was used during inference.

I'm still evaluating whether it's worth continuing; any advice would be appreciated 😄
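For anyone who wants to reproduce the split, here is a minimal sketch of this kind of export, assuming OpenAI's clip package and ViT-B/32 (an illustration, not the exact script used here):

```python
# A minimal sketch (not the exact export script used here): wrap CLIP's two
# encoders as separate modules, trace them, and save for PyTorch Mobile.
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
import clip  # https://github.com/openai/CLIP

model, _ = clip.load("ViT-B/32", device="cpu")
model.eval()

class ImageEncoder(torch.nn.Module):
    def __init__(self, m):
        super().__init__()
        self.m = m
    def forward(self, image):  # (1, 3, 224, 224) float32
        return self.m.encode_image(image)

class TextEncoder(torch.nn.Module):
    def __init__(self, m):
        super().__init__()
        self.m = m
    def forward(self, tokens):  # (1, 77) token ids from clip.tokenize
        return self.m.encode_text(tokens)

img_traced = torch.jit.trace(ImageEncoder(model), torch.rand(1, 3, 224, 224))
txt_traced = torch.jit.trace(TextEncoder(model), clip.tokenize(["a photo"]))
optimize_for_mobile(img_traced)._save_for_lite_interpreter("image_encoder.ptl")
optimize_for_mobile(txt_traced)._save_for_lite_interpreter("text_encoder.ptl")
```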

Refs:


greyovo commented on July 17, 2024

Great news! The Android app (PicQuery) is now on Google Play for free, supporting both English and Chinese: https://play.google.com/store/apps/details?id=me.grey.picquery

The source code will be public soon; I just need to clean up a few things :)

@Young-Flash @williamlee1982 @mazzzystar @stakancheck


greyovo commented on July 17, 2024

PicQuery is open-source now, see https://github.com/greyovo/PicQuery :)


Young-Flash commented on July 17, 2024

@greyovo just go ahead, I am willing to help if needed.


greyovo commented on July 17, 2024

> Maybe you can briefly introduce the required technology stack and technical route, so that others can participate 😄

@Young-Flash In fact, most of the work relies on the Colab script provided by the author @mazzzystar; I just exported the two encoders as PyTorch JIT models instead of CoreML.

Possible improvements in my opinion:

  1. Converting the model for NNAPI execution
  2. Pruning or quantizing the model, which requires a deep understanding of the model structure and may require retraining
  3. Distilling knowledge from the model, which requires familiarity with deep learning techniques and also needs retraining
  4. Looking for other multimodal models similar to CLIP, but I searched around and couldn't find anything more efficient and smaller than ViT-B/32 :(

Perhaps the easiest way is to convert the model for NNAPI in order to speed up the encoders. I tried following PyTorch's official tutorial but failed; it seems to require a PC with an ARM64 processor. I'm not sure if I missed something.
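For reference, the conversion attempted here roughly follows PyTorch's NNAPI tutorial; a condensed sketch under the assumption of a float ViT-B/32 image encoder (this is the step that failed):

```python
# Condensed from PyTorch's NNAPI tutorial (torch.backends._nnapi is
# experimental); this sketch shows the conversion step that failed for me.
import torch
import clip
from torch.backends._nnapi.prepare import convert_model_to_nnapi

model, _ = clip.load("ViT-B/32", device="cpu")
model.eval()

class ImageEncoder(torch.nn.Module):
    def __init__(self, m):
        super().__init__()
        self.m = m
    def forward(self, x):
        return self.m.encode_image(x)

# NNAPI wants the input in NHWC memory layout, flagged via nnapi_nhwc.
example = torch.rand(1, 3, 224, 224).contiguous(memory_format=torch.channels_last)
example.nnapi_nhwc = True

with torch.no_grad():
    traced = torch.jit.trace(ImageEncoder(model), example)
nnapi_model = convert_model_to_nnapi(traced, example)  # ViT ops may be unsupported
nnapi_model._save_for_lite_interpreter("image_encoder_nnapi.ptl")
```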


greyovo commented on July 17, 2024

I have made some progress on quantization with ONNX. Please check my repo CLIP-android-demo for details :)
@Young-Flash @mazzzystar
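For those curious, the core of this step is onnxruntime's dynamic int8 quantization; a minimal sketch (file names are placeholders; the full pipeline is in the repo):

```python
# Minimal sketch: int8 dynamic quantization of the two exported encoders
# with onnxruntime (file names are placeholders; see CLIP-android-demo).
from onnxruntime.quantization import QuantType, quantize_dynamic

for name in ("clip-image-encoder", "clip-text-encoder"):
    quantize_dynamic(
        model_input=f"{name}.onnx",
        model_output=f"{name}-quant-int8.onnx",
        weight_type=QuantType.QInt8,  # store weights as int8
    )
```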


Young-Flash commented on July 17, 2024

I verified the ONNX quantized model; the code is here, and the result on my local machine is as follows:

| model | result |
| --- | --- |
| CLIP | [[6.1091479e-02 9.3267566e-01 5.3717378e-03 8.6108845e-04]] |
| clip-image-encoder.onnx & clip-text-encoder.onnx | [[6.1091259e-02 9.3267584e-01 5.3716768e-03 8.6109847e-04]] |
| clip-image-encoder-quant-int8.onnx & clip-text-encoder-quant-int8.onnx | [[4.703762e-02 9.391219e-01 9.90335e-03 3.93698e-03]] |

I think it is good to go.
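For context, such a verification boils down to running both ONNX encoders, normalizing the embeddings, and softmaxing the scaled similarities, then comparing with CLIP's own output. A rough sketch (input names, dtypes, and the 100.0 logit scale are assumptions based on CLIP's convention, not the linked code):

```python
import numpy as np
import onnxruntime as ort
import clip  # used only for the tokenizer here

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

img_sess = ort.InferenceSession("clip-image-encoder-quant-int8.onnx")
txt_sess = ort.InferenceSession("clip-text-encoder-quant-int8.onnx")

image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # a preprocessed image
tokens = clip.tokenize(["a tiger", "a cat", "a dog", "a bear"]).numpy()

img_emb = img_sess.run(None, {img_sess.get_inputs()[0].name: image})[0]
txt_emb = txt_sess.run(None, {txt_sess.get_inputs()[0].name: tokens})[0]

# Cosine similarity of normalized embeddings, softmaxed as CLIP does.
img_emb /= np.linalg.norm(img_emb, axis=-1, keepdims=True)
txt_emb /= np.linalg.norm(txt_emb, axis=-1, keepdims=True)
print(softmax(100.0 * img_emb @ txt_emb.T))  # compare against CLIP's probs
```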


Young-Flash commented on July 17, 2024

Yeah, I have made it here.


zhangjh commented on July 17, 2024

> Got here from my previous GitHub issue, microsoft/onnxruntime#16472, which linked to this repo.

> Hi, have you solved that issue? And do you use OpenAI CLIP or Chinese-CLIP?

Yeah, I've been using Chinese-CLIP.
I've forgotten how I solved the issue; it's been a long time. Maybe I overlooked it because I found that it worked well in the Android environment.


mazzzystar commented on July 17, 2024

Hi, I'm not good at Android dev, and my energy is devoted to other projects, so I'd be glad if someone could help : )


mazzzystar commented on July 17, 2024

@greyovo I'm not sure if distillation is a good idea compared to quantization. Even if you don't quantize the model, using FP16 would significantly increase inference speed.

I might try to export a lightweight Android version if I have time in the future : ) Is ONNX okay?
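For what it's worth, a minimal sketch of the FP16 route on the ONNX side, assuming the onnxconverter-common package (not something anyone in this thread has confirmed using):

```python
# Convert an exported FP32 ONNX encoder to FP16 weights/ops
# (assumes the onnxconverter-common package; file names are placeholders).
import onnx
from onnxconverter_common import float16

model = onnx.load("clip-image-encoder.onnx")
model_fp16 = float16.convert_float_to_float16(model)
onnx.save(model_fp16, "clip-image-encoder-fp16.onnx")
```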


greyovo commented on July 17, 2024

> btw, how about using NCNN to compile and deploy on Android?

Deploying with NCNN requires solid C++ and JNI development skills, which I am not familiar with... Sorry.


Young-Flash commented on July 17, 2024

> Deploying with NCNN requires solid C++ and JNI development skills, which I am not familiar with... Sorry.

Haha, it doesn't matter at all, just a suggestion. You have done a great job!


mazzzystar commented on July 17, 2024

@greyovo Thanks for your great work, would love to see an Android app : )


Young-Flash commented on July 17, 2024

CLIP doesn't support Chinese well (see here). I tested the same image with Chinese input (["老虎", "猫", "狗", "熊"]) and English input (["a tiger", "a cat", "a dog", "a bear"]); the logits were [[0.09097634 0.18403262 0.24364232 0.4813488 ]] and [[0.04703762 0.9391219 0.00990335 0.00393698]] respectively, so the Chinese result isn't ideal.

@mazzzystar How do you deal with Chinese text input in Queryable?

I tried Chinese-CLIP and an ONNX quantized model today and got ideal results; the code is here, and the results are as follows:

| model | input | result |
| --- | --- | --- |
| Chinese-CLIP | Chinese | [[1.9532440e-03 9.9525285e-01 2.2442457e-03 5.4962368e-04]] |
| Chinese-CLIP | English | [[2.5376787e-03 9.9683857e-01 4.3544930e-04 1.8830669e-04]] |
| clip-cn-image-encoder.onnx & clip-cn-text-encoder.onnx | Chinese | [[1.9535627e-03 9.9525201e-01 2.2446462e-03 5.4973643e-04]] |
| clip-cn-image-encoder.onnx & clip-cn-text-encoder.onnx | English | [[2.5380836e-03 9.9683797e-01 4.3553708e-04 1.8835040e-04]] |
| clip-cn-image-encoder-quant-int8.onnx & clip-cn-text-encoder-quant-int8.onnx | Chinese | [[0.00884504 0.98652565 0.00179121 0.00283814]] |
| clip-cn-image-encoder-quant-int8.onnx & clip-cn-text-encoder-quant-int8.onnx | English | [[0.02240802 0.97132427 0.00435637 0.00191139]] |
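A minimal sketch of this kind of Chinese-CLIP check with the cn_clip package (the model name and image path are illustrative; the actual test code is linked above):

```python
# Sketch: score Chinese labels against one image with Chinese-CLIP.
import torch
from PIL import Image
import cn_clip.clip as clip
from cn_clip.clip import load_from_name

model, preprocess = load_from_name("ViT-B-16", device="cpu")  # assumed variant
model.eval()

image = preprocess(Image.open("tiger.jpg")).unsqueeze(0)  # illustrative path
text = clip.tokenize(["老虎", "猫", "狗", "熊"])

with torch.no_grad():
    logits_per_image, _ = model.get_similarity(image, text)
    print(logits_per_image.softmax(dim=-1).numpy())
```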


greyovo commented on July 17, 2024

@Young-Flash Same here. 😢 But I found that converting the ONNX format to ORT, and using the *.with_runtime_opt.ort version, may bridge the result gap a bit. See here and here... Though differences are still observed, the query results are acceptable. (I am using CLIP, not Chinese-CLIP.)

I also observed that the quantized model may exhibit this problem while the original model does not.

By the way, I have finished the basic indexing and querying features, but I am still working on UI issues. I might replace the model with Chinese-CLIP in the near future.
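For reference, the ONNX-to-ORT conversion is done with onnxruntime's bundled tool; a sketch of the workflow (the directory layout is an assumption):

```python
# The conversion itself is a CLI step, e.g.:
#   python -m onnxruntime.tools.convert_onnx_models_to_ort ./models
# which writes model.ort and model.with_runtime_opt.ort next to each
# .onnx file. Loading the converted model is unchanged:
import onnxruntime as ort

sess = ort.InferenceSession("models/clip-image-encoder.with_runtime_opt.ort")
```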


greyovo commented on July 17, 2024

> @greyovo I tried ORT too but gave up when I found the inference results differed. After that I tried the ResNet-50 in Chinese-CLIP instead of ViT, and got the same result as Python inference. Maybe the problem lies in an operator that the ViT model uses?

I agree.

> Do you plan to share your code? I think Chinese-CLIP is good to try, since it supports Chinese and English.

I will try with Chinese-CLIP. I need to apply for a Software Copyright Certificate (aka 软件著作权) to get it on the app market, and then I'll make it open source.

> Feel free to let me know if there's anything I can help with.

Thanks in advance :) @Young-Flash


zhangjh commented on July 17, 2024

I have already developed an Android app named SmartSearch. You guys can try it.
https://play.google.com/store/apps/details?id=me.zhangjh.smart.search


zhangjh commented on July 17, 2024

Got here from my previous GitHub issue, microsoft/onnxruntime#16472, which linked to this repo.


mazzzystar commented on July 17, 2024

@greyovo Great! I will update the Android code and app details in the README after your work is complete. :)


greyovo commented on July 17, 2024

@Baiyssy @nodis Thanks for the feedback! I've noted all these bugs, and most of the features mentioned will be added; it just takes some time, as I've been busy lately.

@LXY1226 Thanks for the support!


mazzzystar commented on July 17, 2024

@greyovo Great! I've added your Android repository link in the README.


Young-Flash commented on July 17, 2024

You rock!!! @greyovo

It's time to close this issue; new discussions can happen in PicQuery. Thanks everyone 😄


x97425 commented on July 17, 2024

> I split the model into ImageEncoder and TextEncoder, like the author did, and successfully ran the demo on Android using PyTorch Mobile. [...]

I've sent you an email; please check it out when you have time. Respect!


williamlee1982 commented on July 17, 2024

> I split the model into ImageEncoder and TextEncoder, like the author did, and successfully ran the demo on Android using PyTorch Mobile. [...]

Looking forward to your work; I really want this on Android.


Young-Flash commented on July 17, 2024

> I split the model into ImageEncoder and TextEncoder, like the author did, and successfully ran the demo on Android using PyTorch Mobile. [...]

Maybe you can briefly introduce the required technology stack and technical route, so that others can participate 😄


mazzzystar commented on July 17, 2024

> Inference speed: around 10 seconds for 100 images and 250 seconds for 2000 images (much slower than on iOS). All photos are 400×400px JPEGs.

@greyovo For the ViT-B/32 CLIP model, the required image size is 224×224, so some preprocessing may make the indexing faster.


greyovo commented on July 17, 2024

> @greyovo For the ViT-B/32 CLIP model, the required image size is 224×224, so some preprocessing may make the indexing faster.

@mazzzystar Thanks for the advice. I actually did preprocessing (i.e. resizing to 224px, center cropping, normalizing, etc., like CLIP's preprocess() function does) before encoding images. Since I tried with 3000×2000px images and got the same result, I don't think that's the main problem :(
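For clarity, CLIP's preprocess() is essentially this torchvision pipeline (the mean/std values are CLIP's published normalization constants):

```python
from torchvision import transforms

# Resize the short side to 224, center-crop to 224x224, then normalize
# with CLIP's mean/std, mirroring the preprocess() returned by clip.load().
clip_preprocess = transforms.Compose([
    transforms.Resize(224, interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.48145466, 0.4578275, 0.40821073),
                         std=(0.26862954, 0.26130258, 0.27577711)),
])
```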


mazzzystar commented on July 17, 2024

> Pruning or quantizing the model, which requires a deep understanding of the model structure and may require retraining

I think you may be able to do quantization when exporting the PyTorch model. pytorch/pytorch#76726


greyovo commented on July 17, 2024

> I think you may be able to do quantization when exporting the PyTorch model. pytorch/pytorch#76726

@mazzzystar Yes, I also tried quantization but encountered several problems that I could not solve, and hence the quantization failed, not to mention the NNAPI conversion (which needs a quantized model). I may later share the Jupyter notebook I used, to see if anyone can help.


greyovo commented on July 17, 2024

An interesting thing is that I found some efforts on distilling the CLIP model:

At least they proved that knowledge distillation may be a feasible direction, though it requires notable effort.


greyovo commented on July 17, 2024

> I might try to export a lightweight Android version if I have time in the future : ) Is ONNX okay?

@mazzzystar You are right! There's another option: ONNX. It seems they have complete docs and demos. So yes, it's worth a try! Thanks :)


stakancheck commented on July 17, 2024

@mazzzystar Hi, I didn't quite understand where things stand with the Android app. I would take up this project in my spare time and rewrite part of the logic in Kotlin (KMP), but I would need the help of an AI specialist. Are there any developments in this direction?


greyovo commented on July 17, 2024

@stakancheck The original ViT-B/32 model was too large for Android devices (see the discussion above), and hence encoding images into embeddings was much slower than on iOS. So we are working on the model to see whether I or @mazzzystar can export a lightweight version to speed up execution and reduce the model size.

By the way, are you familiar with Kotlin or Jetpack Compose? I am a beginner in Android development (I used Flutter before), but I would welcome help in building the app :)


Young-Flash commented on July 17, 2024

> I just exported the two encoders as PyTorch JIT models instead of CoreML.

@greyovo Could you please share your code (including how to run it on your Xiaomi 12S)?

btw, how about using NCNN to compile and deploy on Android?


mazzzystar commented on July 17, 2024

@Young-Flash
Queryable does not support Chinese. I trained a Chinese text encoder myself; the method is similar to Chinese-CLIP, and you can refer to this article. However, the training data is not MIT open-sourced, so I cannot provide the Chinese version of the model weights, but you can convert the open-source Chinese text encoder above as needed.


Young-Flash commented on July 17, 2024

> Queryable does not support Chinese. I trained a Chinese text encoder myself; the method is similar to Chinese-CLIP, and you can refer to this article. However, the training data is not MIT open-sourced, so I cannot provide the Chinese version of the model weights, but you can convert the open-source Chinese text encoder above as needed.

I see. I found a demo which uses Chinese to query; I thought it was translating Chinese into English, but I didn't find the relevant code here, so I was puzzled.

Chinese-CLIP is a pre-trained model with an MIT license. The above clip-cn-image-encoder-quant-int8.onnx and clip-cn-text-encoder-quant-int8.onnx take 84.93 MB and 97.89 MB, while @greyovo's clip-image-encoder-quant-int8.onnx and clip-text-encoder-quant-int8.onnx take 91.2 MB and 61.3 MB. I think Chinese-CLIP after quantization is acceptable, so maybe we could use it to replace CLIP. What do you think?


mazzzystar commented on July 17, 2024

@Young-Flash That's exactly what I mean. Note that Chinese-CLIP's text-encoder architecture (BERT) is a little different from ViT-B/32's, so you may need to adjust the Jupyter notebook accordingly.
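A sketch of what the adjusted export might look like for Chinese-CLIP's BERT text encoder (the wrapper, file name, and opset are illustrative, not the notebook's actual code):

```python
# Sketch: export Chinese-CLIP's text encoder to ONNX (illustrative only).
import torch
import cn_clip.clip as clip
from cn_clip.clip import load_from_name

model, _ = load_from_name("ViT-B-16", device="cpu")
model.eval()

class TextEncoder(torch.nn.Module):
    def __init__(self, m):
        super().__init__()
        self.m = m
    def forward(self, tokens):  # Chinese-CLIP's context length is 52, not 77
        return self.m.encode_text(tokens)

tokens = clip.tokenize(["一张老虎的照片"])
torch.onnx.export(
    TextEncoder(model), tokens, "clip-cn-text-encoder.onnx",
    input_names=["text"], output_names=["embedding"],
    dynamic_axes={"text": {0: "batch"}},
    opset_version=14,
)
```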


williamlee1982 commented on July 17, 2024

Guys, any update on the Android version? I really want it.


Young-Flash commented on July 17, 2024

I am blocked by a weird onnxruntime issue: the text encoder running on Android gets the same inference result as Python, while the ViT image encoder doesn't.


Young-Flash commented on July 17, 2024

> @Young-Flash Same here. 😢 But I found that converting the ONNX format to ORT, and using the *.with_runtime_opt.ort version, may bridge the result gap a bit. [...]

@greyovo I tried ORT too but gave up when I found the inference results differed. After that I tried the ResNet-50 in Chinese-CLIP instead of ViT, and got the same result as Python inference. Maybe the problem lies in an operator that the ViT model uses?

Do you plan to share your code? I think Chinese-CLIP is good to try, since it supports Chinese and English. Feel free to let me know if there's anything I can help with.


williamlee1982 commented on July 17, 2024

> I have already developed an Android app named SmartSearch. You guys can try it. https://play.google.com/store/apps/details?id=me.zhangjh.smart.search.en

Tried it, but it didn't work; it crashes a few seconds after it starts building the index.


zhangjh commented on July 17, 2024

> I have already developed an Android app named SmartSearch. You guys can try it. https://play.google.com/store/apps/details?id=me.zhangjh.smart.search.en

> Tried it, but it didn't work; it crashes a few seconds after it starts building the index.

Could you share some device info? Which brand, and which version?
Most likely it's due to insufficient phone memory causing an OOM issue.


williamlee1982 commented on July 17, 2024

> Could you share some device info? Which brand, and which version? Most likely it's due to insufficient phone memory causing an OOM issue.

OnePlus 11, ColorOS 13, with 16 GB of memory; it should be fine to run.


Young-Flash commented on July 17, 2024

> Got here from my previous GitHub issue, microsoft/onnxruntime#16472, which linked to this repo.

Hi, have you solved that issue? And do you use OpenAI CLIP or Chinese-CLIP?


Young-Flash commented on July 17, 2024

@greyovo Nice UI, thanks for your great work.
But it seems it doesn't work on my device (Honor 9X Pro): it indexes pics, but I can't get a single pic back after a query.



nodis commented on July 17, 2024

> Great news! The Android app (PicQuery) is now on Google Play for free, supporting both English and Chinese: https://play.google.com/store/apps/details?id=me.grey.picquery
>
> The source code will be public soon; I just need to clean up a few things :)

@greyovo

Hi, when indexing an album, the app crashes every time after around 800 or 900 images have been scanned.
Environment:
Phone: Xiaomi 13 Ultra
Android version: Android 13
MIUI version: MIUI 14 V14.0.23.9.18.DEV development version

```
java.io.FileNotFoundException: Failed to create image decoder with message 'invalid input'Input contained an error.
at android.database.DatabaseUtils.readExceptionWithFileNotFoundExceptionFromParcel(DatabaseUtils.java:151)
at android.content.ContentProviderProxy.openTypedAssetFile(ContentProviderNative.java:780)
at android.content.ContentResolver.openTypedAssetFileDescriptor(ContentResolver.java:2029)
at android.content.ContentResolver.openTypedAssetFile(ContentResolver.java:1934)
at android.content.ContentResolver.lambda$loadThumbnail$0(ContentResolver.java:4159)
at android.content.ContentResolver$$ExternalSyntheticLambda1.call(Unknown Source:10)
at android.graphics.ImageDecoder$CallableSource.createImageDecoder(ImageDecoder.java:550)
at android.graphics.ImageDecoder.decodeBitmapImpl(ImageDecoder.java:1870)
at android.graphics.ImageDecoder.decodeBitmap(ImageDecoder.java:1863)
at android.content.ContentResolver.loadThumbnail(ContentResolver.java:4158)
at android.content.ContentResolver.loadThumbnail(ContentResolver.java:4142)
at me.grey.picquery.domain.ImageSearcher.encodePhotoList(ImageSearcher.kt:116)
at me.grey.picquery.domain.ImageSearcher$encodePhotoList$1.invokeSuspend(Unknown Source:14)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:280)
at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59)
at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source:1)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38)
at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source:1)
at me.grey.picquery.domain.AlbumManager$encodeAlbums$2.invokeSuspend(AlbumManager.kt:103)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:584)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:793)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:697)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:684)
Suppressed: kotlinx.coroutines.internal.DiagnosticCoroutineContextException: [androidx.compose.ui.platform.MotionDurationScaleImpl@e39da66, androidx.compose.runtime.BroadcastFrameClock@1141a7, StandaloneCoroutine{Cancelling}@7840654, AndroidUiDispatcher@61c79fd]
```


zhangjh commented on July 17, 2024


mazzzystar commented on July 17, 2024

@zhangjh
Thank you for letting me know it's open-sourced. I deleted it because I felt it was unrelated to this issue.


Baiyssy commented on July 17, 2024

> Great news! The Android app (PicQuery) is now on Google Play for free, supporting both English and Chinese: https://play.google.com/store/apps/details?id=me.grey.picquery
>
> The source code will be public soon; I just need to clean up a few things :)
>
> @Young-Flash @williamlee1982 @mazzzystar @stakancheck

Amazing! For a first version, the level of polish is already impressive!
It runs almost perfectly on my Xiaomi 9 (Android 11), with about 12,000 images indexed.
A few small issues and suggestions:

  1. The app crashes when indexing certain folders.
  2. Occasionally, after a search, tapping the X at the right end of the search box does not put focus in the search box, and I have to tap the box again before I can type.
  3. Search returns at most 30 images; could more results be shown?
  4. It would be nice to add a share menu to the image viewer; currently I can only view.
  5. Could search results optionally be sorted by relevance or by time?
  6. Could time and location filters be added?
  7. Could related keywords be suggested? For example, when I search for "sunrise" and the app finds many photos of sunrise over the sea, it could suggest "sunrise at sea" as a related keyword. Though this may not be feasible in principle.

Thanks!


LXY1226 commented on July 17, 2024

Great job, everyone!
I've also run into the crashes; looking forward to the open-source release so I can go fix them right away. Also, ONNX's NNAPI support looks like it could be used directly.


Baiyssy commented on July 17, 2024

> @Baiyssy @nodis Thanks for the feedback! I've noted all these bugs, and most of the features mentioned will be added; it just takes some time, as I've been busy lately.
>
> @LXY1226 Thanks for the support!

It seems newly added images are not indexed?


greyovo commented on July 17, 2024

> It seems newly added images are not indexed?

@Baiyssy Yes, I forgot to consider that. The current version cannot update the index automatically, and there is no way to rebuild the index either... I'll fix this in a future release.

