Would you say deep learning models have become so good that robust AI systems are no longer a dream, but a reality?
Do you think you can safely use the latest models published by researchers on any real-world problem, like self-driving cars or face-recognition software at airports?
Are you convinced that machines are already better than humans at processing and understanding images?
I was too, until I realized it is possible to deceive a state-of-the-art model, like DeepMind's Perceiver, with a few lines of code.
In this repo, I show you how.
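Before diving into the notebook, here is a minimal sketch of the core idea: an adversarial attack nudges the input in the direction that increases the model's loss, flipping its prediction while the change stays tiny. The snippet below illustrates the Fast Gradient Sign Method (FGSM) on a toy logistic-regression "model"; it is an assumption-laden illustration of the general technique, not the actual Perceiver attack from the notebook.

```python
import numpy as np

# Toy stand-in for a classifier: logistic regression with fixed weights.
# (Hypothetical example; the real tutorial attacks DeepMind's Perceiver.)
rng = np.random.default_rng(0)
w = rng.normal(size=8)

def predict_proba(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# A clean input the model confidently assigns to class 1.
x = w / np.linalg.norm(w)

# FGSM: step in the sign of the loss gradient w.r.t. the INPUT.
# For logistic loss with true label y = 1, d(loss)/dx = (p - y) * w.
y = 1.0
p = predict_proba(x)
grad_x = (p - y) * w

eps = 0.5                           # attack budget (L-infinity norm)
x_adv = x + eps * np.sign(grad_x)   # one signed gradient step

print("clean prob:      ", predict_proba(x))      # > 0.5 (class 1)
print("adversarial prob:", predict_proba(x_adv))  # < 0.5 (flipped)
```

The attack on a deep model works the same way, except the gradient comes from backpropagation through the network rather than a closed-form expression.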
The Gap Between Research And Robust AI
You can run this tutorial on your local computer or directly on Colab.
Create a virtual environment for Python >= 3.8, activate it, and run:

$ git clone https://github.com/Paulescu/fooling_deepmind.git
$ cd fooling_deepmind
(venv) $ pip install jupyter
(venv) $ jupyter notebook
Then open the notebook.
If you want to learn more about real-world ML topics and become a better data scientist, subscribe to the datamachines newsletter.