shifthackz / stable-diffusion-android
Stable Diffusion AI client app for Android
Home Page: https://sdai.moroz.cc
License: GNU Affero General Public License v3.0
Can you please add a text-to-video model to Local Diffusion? I really want to make videos offline. Stable Diffusion works better on my phone than on my PC, which is too weak for AI video generation. Even a text-to-video model that produces a 1- or 2-second GIF would be great, please.
Describe the bug
When trying to use a local A1111 instance I receive an error:
"Parameter specified as non-null is null: method com.shifthackz.aisdv1.domain.entity.ServerConfiguration., parameter sdModelCheckpoint"
To Reproduce
Unknown as it always happens for me on multiple devices.
Things I have tried:
Simplified the network so that the A1111 server and the mobile device are on the same L2 segment.
Simplified startup flags to only "--api --listen"
Tried authed and anonymous access
Tried FOSS and Play Store version
Updated A1111 to latest
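The crash names the sdModelCheckpoint parameter, so one hypothesis worth checking is that the A1111 /sdapi/v1/options response simply lacks a non-null sd_model_checkpoint value. Below is a minimal probe sketch; it assumes only that the server exposes the standard A1111 options endpoint, and the function names and field list are illustrative, not the app's actual code:

```python
import json
from urllib.request import urlopen

def missing_config_fields(options_json: str) -> list[str]:
    """Return A1111 option keys that are absent or null; a null
    sd_model_checkpoint would explain a non-null Kotlin parameter crash."""
    options = json.loads(options_json)
    required = ["sd_model_checkpoint"]  # the field named in the error message
    return [k for k in required if options.get(k) is None]

def probe(base_url: str) -> list[str]:
    """Fetch /sdapi/v1/options from a running A1111 instance and report
    missing fields, e.g. probe("http://192.168.1.10:7860")."""
    with urlopen(f"{base_url}/sdapi/v1/options", timeout=10) as resp:
        return missing_config_fields(resp.read().decode())
```

If the probe reports a missing field, the server-side configuration (no checkpoint loaded yet) rather than the network setup would be the place to look.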
Expected behavior
Local A1111 usage works as described
Desktop (please complete the following information):
Smartphone (please complete the following information):
Is your feature request related to a problem? Please describe.
I tested a few of the local diffusion models and accidentally clicked on delete, which instantly deleted the model.
Describe the solution you'd like
Add a popup to make sure the model isn't deleted by accident.
Describe alternatives you've considered
None
Many thanks for a good app; having the ability to perform on-device inference is very welcome.
Unfortunately, the time required for each image generation is painfully slow.
Qualcomm has recently announced the AI Hub, which includes specific APIs and models for on-device inference.
https://aihub.qualcomm.com/models/stable_diffusion_quantized
Any plans to update the app to take advantage of the Qualcomm announcements?
Loading a custom model is rather hard when starting from safetensors or ckpt files.
I hope there will eventually be a way to use safetensors or ckpt files directly.
For now, another way to convert safetensors to ONNX would be good.
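For reference, one possible conversion path goes through diffusers and then optimum's ONNX export. This is only a sketch: it assumes the third-party diffusers and optimum packages are installed, and the paths and function name are illustrative:

```python
def convert_to_onnx(checkpoint_path: str, output_dir: str) -> None:
    """Hypothetical sketch: convert a single-file .safetensors/.ckpt SD 1.5
    checkpoint to an ONNX pipeline (assumes diffusers + optimum installed)."""
    from diffusers import StableDiffusionPipeline
    from optimum.onnxruntime import ORTStableDiffusionPipeline

    # Load the single-file checkpoint into a diffusers pipeline layout...
    pipe = StableDiffusionPipeline.from_single_file(checkpoint_path)
    pipe.save_pretrained("/tmp/sd15-diffusers")  # intermediate directory

    # ...then export that pipeline to ONNX and save it.
    ort_pipe = ORTStableDiffusionPipeline.from_pretrained(
        "/tmp/sd15-diffusers", export=True
    )
    ort_pipe.save_pretrained(output_dir)
```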
Please add an option to define wildcards for easier prompting. Wildcards can be fun to use :)
Describe the bug
Error: HTTP 500 on generation attempt
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Image should generate
Screenshots
Desktop (please complete the following information):
Smartphone (please complete the following information):
Additional context
I'm pretty sure this is because I'm using Cloudflare. Is there any way to bypass this, or should I be using localtunnel instead? I'm running this against my Colab...
Is your feature request related to a problem? Please describe.
Add/enable a local img2img feature, perhaps labeled as risky/experimental. I have Stable Diffusion set up inside chroot'ed Debian in Termux and it functions reasonably well, but it feels cluttered. Some devices these days can handle it!
Hide it in a developer menu with a disclaimer attached if you have to 💯
Describe alternatives you've considered
Chroot'ed SD server + webui in Termux using:
https://github.com/leejet/stable-diffusion.cpp
https://github.com/DaniAndTheWeb/sd.cpp-webui
User Story:
As a user, I would like to be able to access the images in my local image gallery.
For background: the pictures currently appear to be stored in a local database (I could not find the images in the Android file system).
The share function works, BUT it takes too many clicks to save a picture to the phone.
So, in addition to the share function, a separate "Save" button would be useful here.
I have an idea: you don't have to add text-to-video, since our phones probably can't handle it, but image-to-video may be better because we could animate the images we create with the app. Also, please add a face-fixer option, because in most pictures with people the app won't create decent faces even with the new models you added. Maybe my phone is just too weak, but please add image-to-video.
Perhaps I'm just not seeing them, or am otherwise ignorant as to how to include them, but it would be nice to be able to specify hi-res options, at least for sessions connected to LAN A1111 servers: selecting different upscalers, choosing the number of steps, denoising strength, etc. Many models are optimized to generate at ~512x512, so we rely on upscalers to bring the generated images closer to modern HD resolutions. It would also be nice to play with extensions such as ADetailer, but that's probably far more difficult. I'm really enjoying the app otherwise!
Is your feature request related to a problem? Please describe.
It would be neat if you could run a model locally on your phone; on devices with a lot of RAM this could perhaps be possible. I think you would need to implement a Stable Diffusion backend in TensorFlow Lite and then convert the models. I understand this would be a massive undertaking and would only be useful to some users, but I still think it would be neat! I understand if this is too much to ask; maybe someone else would be willing to implement it.
Describe the solution you'd like
A backend for Stable Diffusion in TensorFlow Lite, allowing images to be generated locally on very powerful phones.
Describe alternatives you've considered
There are a few other frameworks that support NNAPI, I think; these could also be used. You could also skip NNAPI and do everything on the CPU with something like NCNN, in theory, with a wrapper or similar.
Additional context
Referenced libraries:
https://developer.android.com/ndk/guides/neuralnetworks
https://www.tensorflow.org/lite
https://github.com/Tencent/ncnn
Proof of concept:
https://www.qualcomm.com/news/onq/2023/02/worlds-first-on-device-demonstration-of-stable-diffusion-on-android
Is your feature request related to a problem? Please describe.
I have installed multiple TIs (embeddings), LoRAs, LyCORIS models, and hypernetworks, and there is no way I can remember all of them.
Describe the solution you'd like
If it's not too much work on your end, add a modal that shows these options with a search field.
Describe alternatives you've considered
n/a
Can y'all please add SDXL 1.0 to Local Diffusion mode, and make it possible to add our own LoRAs and checkpoints from CivitAI? I'm enjoying this app, but I can't get good-looking faces. SDXL 1.0 would make our images look better, especially if we could also add LoRAs and checkpoints and get advanced options in Local Diffusion. I always wanted to use SDXL locally on my phone. Thank you for such a great app :)
Is your feature request related to a problem? Please describe.
When a local model is selected, show text underneath it that says "advanced options".
Describe the solution you'd like
This option would allow adjusting the weights of the model and the number of steps.
Describe alternatives you've considered
As there is no alternative SD client that runs locally on the hardware, I am not sure what else I can do here. Maybe run actual SD in Exagear?
Is your feature request related to a problem? Please describe.
Hi, @ShiftHackZ! Thank you very much for this application (and especially for the Ukrainian language). I tested it and it works, but I can say that if you have a slash at the end of the URL (I tested it with my notebook in Colab; the repository is https://github.com/anapnoe/stable-diffusion-webui-ux), then you cannot connect to the environment. Maybe that's one of the reasons some users cannot connect. The project is very promising, but first of all it really lacks inpainting and the other standard features that the webui has. Hopefully these will be added as well. Thank you!
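The trailing-slash failure suggests the client concatenates URL paths naively. A defensive normalization step, sketched here in Python for illustration (the function name is hypothetical):

```python
def normalize_server_url(url: str) -> str:
    """Strip whitespace and trailing slashes so that concatenation like
    url + "/sdapi/v1/txt2img" never produces a double slash."""
    return url.strip().rstrip("/")
```

Applying this once, at the point where the user saves the server URL, would make both `http://host:7860` and `http://host:7860/` work identically.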
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
Smartphone (please complete the following information):
Additional context
Add any other context about the problem here.
Is your feature request related to a problem? Please describe.
I don't know the plethora of LoRAs I have installed, nor their unique call-outs, so I just write prompts and hope I get the trigger words right.
Describe the solution you'd like
A drop-down list to add them to the prompt, or better, a tab that includes all their preview images and either adds them to the prompt or copies the trigger to the clipboard.
Describe alternatives you've considered
Running a bash script to generate all the triggers into a text or CSV file that I can sync to my phone with Syncthing.
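That script idea is straightforward; here is a Python sketch of it (the directory layout, file extensions, and function names are assumptions for illustration):

```python
from pathlib import Path

def list_lora_names(models_dir: str) -> list[str]:
    """Collect LoRA file stems (the names used as <lora:NAME:1> triggers
    in A1111 prompts) from a models directory, recursively."""
    root = Path(models_dir)
    stems = {p.stem for p in root.rglob("*")
             if p.suffix in {".safetensors", ".ckpt", ".pt"}}
    return sorted(stems)

def write_trigger_file(models_dir: str, out_file: str) -> None:
    """Dump one name per line, ready to sync to the phone."""
    Path(out_file).write_text("\n".join(list_lora_names(models_dir)) + "\n")
```

Note this only recovers file names; true trigger words baked into a LoRA's training captions are not stored in the filename.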
Have you considered supporting Segmind models? They have some small (and tiny) SD and SDXL models that seem to be decent. 🙏
Hello, the application is really good, but it is really slow; maybe it could be faster with the Mesa driver.
Is your feature request related to a problem? Please describe.
I cannot add my A1111 instance because you need to authenticate beforehand using Apache's basic auth, with a URL structure like https://username:password@<server>
Describe the solution you'd like
A way to use the auth together with the proxy.
Describe alternatives you've considered
None, to be honest.
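For context, a URL of the form https://user:pass@host is equivalent to sending an HTTP Basic Authorization header, which a client could attach explicitly instead of parsing credentials out of the URL. A minimal sketch (the function name is illustrative):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict[str, str]:
    """Build the HTTP Basic Authorization header implied by a
    https://user:pass@host URL (RFC 7617)."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```

Attaching this header to every API request would satisfy both an Apache basic-auth reverse proxy and A1111's own --gradio-auth-style protection, without special URL handling.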
Describe the bug
I tried following the procedure described on the app page, but it doesn't work; I had to add --share to get a public URL.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
As written in the guide, I expected the interface to start so that I could generate images.
Desktop (please complete the following information):
Smartphone (please complete the following information):
Describe the bug
When I try to generate an image using the app, I get an error 500, and in the logs of stable-diffusion-webui I see this:
*** API error: POST: http://192.168.178.220:9000/sdapi/v1/txt2img {'error': 'URLError', 'detail': '', 'body': '', 'errors': '<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)>'}
Traceback (most recent call last):
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/streams/memory.py", line 98, in receive
return self.receive_nowait()
^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/streams/memory.py", line 93, in receive_nowait
raise WouldBlock
anyio.WouldBlock
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 78, in call_next
message = await recv_stream.receive()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/streams/memory.py", line 118, in receive
raise EndOfStream
anyio.EndOfStream
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/config/02-sd-webui/webui/modules/api/api.py", line 186, in exception_handling
return await call_next(request)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 84, in call_next
raise app_exc
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 70, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 108, in __call__
response = await self.dispatch_func(request, call_next)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/webui/modules/api/api.py", line 150, in log_and_time
res: Response = await call_next(req)
^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 84, in call_next
raise app_exc
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 70, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/cors.py", line 84, in __call__
await self.app(scope, receive, send)
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 24, in __call__
await responder(scope, receive, send)
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 44, in __call__
await self.app(scope, receive, self.send_with_gzip)
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/fastapi/routing.py", line 237, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/fastapi/routing.py", line 165, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/webui/modules/api/api.py", line 379, in text2imgapi
processed = process_images(p)
^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/webui/modules/processing.py", line 734, in process_images
res = process_images_inner(p)
^^^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/webui/modules/processing.py", line 808, in process_images_inner
sd_vae_approx.model()
File "/config/02-sd-webui/webui/modules/sd_vae_approx.py", line 53, in model
download_model(model_path, 'https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/download/v1.0.0-pre/' + model_name)
File "/config/02-sd-webui/webui/modules/sd_vae_approx.py", line 39, in download_model
torch.hub.download_url_to_file(model_url, model_path)
File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/torch/hub.py", line 611, in download_url_to_file
u = urlopen(req)
^^^^^^^^^^^^
File "/config/02-sd-webui/env/lib/python3.11/urllib/request.py", line 216, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/env/lib/python3.11/urllib/request.py", line 519, in open
response = self._open(req, data)
^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/env/lib/python3.11/urllib/request.py", line 536, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/env/lib/python3.11/urllib/request.py", line 496, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "/config/02-sd-webui/env/lib/python3.11/urllib/request.py", line 1391, in https_open
return self.do_open(http.client.HTTPSConnection, req,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/02-sd-webui/env/lib/python3.11/urllib/request.py", line 1351, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)>
---
To Reproduce
Steps to reproduce the behavior:
Expected behavior
No error
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
Smartphone (please complete the following information):
Additional context
Add any other context about the problem here.
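The traceback shows the webui server itself failing TLS verification while downloading the VAE-approx model, which usually means its Python environment cannot find a CA bundle. A small diagnostic sketch (the function name is illustrative):

```python
import os
import ssl

def ca_bundle_hint() -> str:
    """Report which CA bundle Python's default SSL context will use;
    an empty or odd path here usually explains CERTIFICATE_VERIFY_FAILED."""
    paths = ssl.get_default_verify_paths()
    return os.environ.get("SSL_CERT_FILE") or paths.cafile or paths.capath or ""
```

A common workaround is exporting SSL_CERT_FILE to a valid bundle before launching the webui, e.g. `export SSL_CERT_FILE=$(python -m certifi)` (assumes the certifi package is installed).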
ref: https://monitor.f-droid.org/builds/log/com.shifthackz.aisdv1.app.foss/165#site-footer
Local APK:
com.shifthackz.aisdv1.app.foss_165.7z.001.zip
com.shifthackz.aisdv1.app.foss_165.7z.002.zip
(remove .zip from both names, then unpack .001)
Upstream APK: https://github.com/ShiftHackZ/Stable-Diffusion-Android/releases/tag/0.5.2
The difflog is 1.1 GB (!!):
sdai2.log.7z.001.zip
sdai2.log.7z.002.zip
sdai2.log.7z.003.zip
(remove .zip from all names, then unpack .001)
/LE: fyi https://gitlab.com/fdroid/fdroiddata/-/commit/4fc53aa63a3fbf644ed1a105fa4a1cd3c9e0c8a9
/LE2: added diff log
Is the inference completed on the cloud server or on the local smartphone? It is important to me.
Describe the bug
When I try to access SD through the app, it results in a 500 Internal Server Error, while it works if I go to my server's IP directly.
To Reproduce
Steps to reproduce the behavior:
./webui.sh --use-cpu all --precision full --no-half --skip-torch-cuda-test --api --listen
Expected behavior
To be connected without errors
Desktop (please complete the following information):
Smartphone (please complete the following information):
Describe the bug
As mentioned in the readme, up to 2048 px should be supported, but I can't set it; if I do, I see the message 'Minimum Size is 1024'.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Up to 2048 px in size.
Smartphone (please complete the following information):
Additional context
I really need this
Is your feature request related to a problem? Please describe.
Most people, myself included, find the base SD model to be pretty bad.
For example, if you want an anime/manga art style, it's just awful.
Describe the solution you'd like
The ability to import checkpoints (based on the SD 1.5 architecture) in .safetensors or .pth format into the app.
Let us use them for inference and switch between them easily.
Describe alternatives you've considered
Maybe it's possible to replace the default checkpoint in Android\data or wherever, but I won't mess with that.
Additional context
Please support importing SD 1.5 checkpoints and VAEs,
maybe with a way to easily retrieve them from HuggingFace/CivitAI with a nice UI.
Not all models have a VAE, though, so it would be pretty hard to make something that works with every model there.
Please add an option to request a batch.
I'm trying to tap things, but the UI isn't working :( I was so ready for this new update! I reinstalled twice, but the same bug remains and nothing works: I can't tap Local Diffusion, Horde AI, or Automatic1111, and even the demo switch doesn't work.
I understand it's in beta, but Local Diffusion just returns ERROR: HTTP 503.
To Reproduce
Steps to reproduce the behavior:
Expected Behavior
It should download the AI model.
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
Smartphone (please complete the following information):
Good afternoon. Thanks for your project. I would like to ask you to add support for https://github.com/oobabooga/text-generation-webui
The application works identically to Automatic1111. I would really like to use a good chat AI on the phone instead of ChatGPT.
Thank you in advance, and good luck!
P.S. For example, you could add tabs where we select Automatic1111 or the text-generation AI.
Describe the bug
When attempting to use NNAPI, at least on my device (Pixel 6a), it fails to run the model for whatever reason. I suspect this is due to the Tensor chips found in Pixel devices; it's an obviously rare and unusual CPU/TPU (apparently based on Samsung Exynos, but with a cut-down TPU added), so I would imagine that's the cause. I could be wrong, though: as the app says, it's an experimental feature on an already experimental backend, so it might be unrelated to the TPU.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
It should be able to load the model and, in theory, be much faster.
Smartphone (please complete the following information):
ref: https://gitlab.com/fdroid/fdroiddata/-/jobs/6136043977#L1808 — the difference is empty, hence I smell an APK issue.
Comparing directly:
$ apksigcopier compare sdai-foss-release-0.5.4.apk com.shifthackz.aisdv1.app.foss_168_signed.apk && echo OK
DOES NOT VERIFY
ERROR: APK Signature Scheme v2 signer #1: APK integrity check failed. CHUNKED_SHA256 digest mismatch. Expected: <3036ab79c1e6736c09688a098dffbc4855159cdd235ace29afac617c407ce6a1>, actual: <9e08aca38457ddad31ba19483ac3aeec852ede381c5ff887bce093482fd20cdc>
Error: failed to verify /tmp/tmpm9_v9asj/output.apk.
Is the APK aligned? Does it have all the needed signatures?
As a user, I would like to be able to save the prompts I use as a template so that I don't have to enter them manually each time.
As the title says, I would love to be able to use this to manage SD remotely. I have remote access set up using Cloudflare Tunnel with Cloudflare Zero Trust, using OAuth authorization via GitHub login emails. It sounds complicated, but it is incredibly streamlined: if I am logged into my GitHub account (or another account I have allowed access to), I can navigate to the specified domain as if I were connecting locally, yet only people with the specific GitHub-connected emails can do so. I love the security and how you hardly even notice it.
It would be amazing to somehow be able to use this client remotely. Are there any plans to make what I described above possible, or potentially some way to make it work now? Thank you.
Is your feature request related to a problem? Please describe.
I have quite a few points on DreamStudio, and I'd like to use its API (the official Stability API) in the app.
Describe the solution you'd like
Add StabilityAI as a new provider.
Describe alternatives you've considered
The only alternative is to use DreamStudio's official website.
Additional context
DreamStudio website: https://dreamstudio.ai/
Stability API documentation: https://platform.stability.ai/docs/getting-started/
https://platform.stability.ai/docs/api-reference/
https://platform.stability.ai/docs/features/
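For context, a request against the documented v1 REST API could be assembled roughly as follows. This is only a sketch: the engine id, parameter values, and function name are illustrative, and the linked documentation is the authoritative source for the schema:

```python
import json
from urllib.request import Request

STABILITY_API = "https://api.stability.ai"

def build_txt2img_request(api_key: str, prompt: str,
                          engine: str = "stable-diffusion-v1-6") -> Request:
    """Assemble (but do not send) a text-to-image request for
    Stability's v1 REST API; engine id is an assumed example."""
    body = {
        "text_prompts": [{"text": prompt}],
        "cfg_scale": 7,
        "steps": 30,
        "samples": 1,
    }
    return Request(
        f"{STABILITY_API}/v1/generation/{engine}/text-to-image",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        method="POST",
    )
```

Since this uses the same bearer-token pattern as Horde, a StabilityAI provider would fit the app's existing provider abstraction fairly naturally.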
Thanks for the great work. I hope in the future the API will support more features.
I can successfully generate images, but even with "always save images" activated, they are only visible in the app gallery, and I have to save them to the phone gallery manually. Is it possible to save them there automatically?
My problem is that, after a while, the app gallery is no longer visible, and then I have no access to the generated images at all. Screenshot of the app gallery showing the problem
It appears that the app wants two tokenizer_config.json files in the same folder, which is impossible. In presentation/src/main/java/com/shifthackz/aisdv1/presentation/screen/setup/ServerSetupScreen.kt, line 618, do you mean vocab.json?
Another question: I converted the model to ORT successfully; here, should I use model.with_runtime_opt.ort or the normal one? And does the app need model_index.json?
Is your feature request related to a problem? Please describe.
I would like the nsfw flag not to be set to false
at all times in both the Img2Img and Text2Img payloads for Horde AI.
(Referencing the payload files: false is simply hardcoded there for some reason.)
Describe the solution you'd like
Add a simple config option (radio button) for nsfw on/off.
Has anything changed?
I'm on Android 13 (Galaxy M52 5G), the latest version 0.5.4, both the Google Play and FOSS versions.
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The saved file should appear in the directory listing and in the gallery.
Screenshots
Gallery not loading
Desktop (please complete the following information):
Smartphone (please complete the following information):
Additional context
Running AUTOMATIC1111/stable-diffusion-webui 1.7.0 as a render server.
Add A1111 built-in user authentication support.
I got "java.lang.IllegalStateException: Expected BEGIN_OBJECT but was STRING at line 1 column 1 path $" when using image-to-image.
• What I did:
(my original image)
Prompt: animated city, ultra HD, extreme details, 8K, masterpiece, best prompt, best artist, best image, best AI
Negative prompt: blur, ugly, worst image, worst prompt, worst animated city, worst artist, worst quality
Width/Height: 512
Sampling Method: DPM++ 2M Karras
Seed: 1
Variation seed: (empty)
Variation strength: 0
Sampling steps: 30
CFG Scale: 6
Denoising Strength: 0.75
Restore faces: (no check)
Please help me.
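For comparison, those settings map onto an A1111 /sdapi/v1/img2img JSON payload roughly as below (a sketch; the field names follow the public A1111 API, and the helper function is illustrative). The "Expected BEGIN_OBJECT but was STRING" error typically means the server answered with a plain string (for example an error page) instead of a JSON object:

```python
import base64

def build_img2img_payload(image_bytes: bytes) -> dict:
    """Mirror the settings from the report as an A1111 /sdapi/v1/img2img
    payload; a healthy response parses to a JSON object (BEGIN_OBJECT),
    while an HTML/text error body parses as a bare string."""
    return {
        "init_images": [base64.b64encode(image_bytes).decode()],
        "prompt": "animated city, ultra HD, extreme details, 8K, masterpiece",
        "negative_prompt": "blur, ugly, worst quality",
        "width": 512,
        "height": 512,
        "sampler_name": "DPM++ 2M Karras",
        "seed": 1,
        "subseed_strength": 0,      # "Variation strength: 0"
        "steps": 30,
        "cfg_scale": 6,
        "denoising_strength": 0.75,
        "restore_faces": False,
    }
```

Checking the server logs for whatever non-JSON body it returned at that moment would likely identify the real failure.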