Using ChatGPT to write Python for you
Using GPT to write SQL queries for you
An example of using ChatGPT as a backend CRUD API - possibly a parody but still interesting
Comprehensive database of AIs https://theresanaiforthat.com/
Resumeworded: Improve your resume and LinkedIn profile https://www.resumeworded.com/
Cleanup: Remove any unwanted object from your pictures https://cleanup.pictures/
Flair: Design branded content in a flash https://flair.ai/
Illustroke: Create killer vector images from text prompts https://illustroke.com/
Stockimg: Generate the perfect stock photo you need https://stockimg.ai/
Looka: Design your own beautiful brand https://looka.com/
StockAI: Massive collection of free, AI-generated stock photos https://www.stockai.com/
Lexica: Search a massive library of curated AI images https://lexica.art/
Beatoven: Create unique royalty-free music https://www.beatoven.ai/
Soundraw: Stop searching for the song you need. Create it. https://soundraw.io/
Synthesia: Create AI videos by simply typing in text. https://www.synthesia.io/
Let AI write the code for you!
https://github.com/features/copilot
GitHub Copilot uses the OpenAI Codex to suggest code and entire functions in real-time, right from your editor.
You can use pretrained models via a web service:
https://huggingface.co/docs/api-inference/index
Test and evaluate, for free, over 80,000 publicly accessible machine learning models, or your own private models, via simple HTTP requests, with fast inference hosted on Hugging Face shared infrastructure.
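As a sketch of what "simple HTTP requests" means in practice, here is a minimal Python client for the hosted Inference API using only the standard library. The model name is an assumption (a common sentiment-analysis model); you would substitute any public model ID and your own (free) access token.

```python
import json
import urllib.request

# Assumed model ID for illustration -- any public Hugging Face model works here.
API_URL = ("https://api-inference.huggingface.co/models/"
           "distilbert-base-uncased-finetuned-sst-2-english")

def query(payload, token):
    """POST a JSON payload to the hosted model and return the parsed JSON reply."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (needs a free Hugging Face access token):
# query({"inputs": "I loved this movie!"}, token="hf_xxx")
```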
You can also download models to your machine and use them "offline."
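The offline route looks roughly like this with the `transformers` library (assuming `pip install transformers torch`): the first call downloads and caches the model weights, after which inference runs entirely on your machine.

```python
def build_classifier(task="sentiment-analysis"):
    """Build a locally-running model pipeline.

    The import is done lazily so nothing is downloaded until you call this;
    weights are fetched once, cached, and reused offline afterwards.
    """
    from transformers import pipeline  # pip install transformers torch
    return pipeline(task)

# Example (first run downloads the default model for the task, then works offline):
# classifier = build_classifier()
# classifier("This library is great!")
```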
OpenAI has an API that lets you invoke pretrained text and image generation models as a web service:
Access GPT-3, which performs a variety of natural language tasks, Codex, which translates natural language to code, and DALL·E, which creates and edits original images.
They charge for their API, but you can get $18 in free credit when you sign up.
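A minimal sketch of calling the OpenAI completions endpoint over raw HTTP, standard library only. The model name is one of the GPT-3 family models from their documentation; the API key is a placeholder you get from your account page.

```python
import json
import urllib.request

def complete(prompt, api_key, model="text-davinci-003"):
    """Send a prompt to the OpenAI completions endpoint; return the generated text."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=json.dumps({"model": model,
                         "prompt": prompt,
                         "max_tokens": 64}).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]

# Example (needs a real API key from your OpenAI account):
# complete("Write a haiku about Python.", api_key="sk-xxx")
```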
CLIP can run on your machine and can be used as a "zero-shot" image classifier:
https://github.com/openai/CLIP
CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3. We found CLIP matches the performance of the original ResNet50 on ImageNet “zero-shot” without using any of the original 1.28M labeled examples, overcoming several major challenges in computer vision.
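Zero-shot classification with CLIP follows the pattern in the openai/CLIP README: embed the image and a set of candidate captions, then pick the caption CLIP scores highest. A sketch, assuming the repo's install steps (`pip install torch` plus `pip install git+https://github.com/openai/CLIP.git`); the image path and label set are placeholders.

```python
def classify(image_path, labels):
    """Return whichever label CLIP scores highest for the image (zero-shot)."""
    import torch
    import clip  # pip install git+https://github.com/openai/CLIP.git
    from PIL import Image

    model, preprocess = clip.load("ViT-B/32", device="cpu")
    image = preprocess(Image.open(image_path)).unsqueeze(0)
    # Phrasing labels as captions ("a photo of ...") is the prompt style
    # used in the CLIP README.
    text = clip.tokenize([f"a photo of a {label}" for label in labels])

    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1)
    return labels[probs.argmax().item()]

# Example (placeholder image path):
# classify("cat.jpg", ["cat", "dog", "car"])
```

Note that the labels are supplied at call time, not at training time: that is what makes it "zero-shot."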