
saveddit's Introduction


saveddit is a bulk media downloader for Reddit.

pip3 install saveddit

Setting up authorization

  • Create a Reddit application to obtain a Reddit client ID and client secret

  • Create an Imgur application to obtain an Imgur client ID
These registrations will authorize you to use the Reddit and Imgur APIs to download publicly available information.

User configuration

The first time you run saveddit, you will see something like this:

foo@bar:~$ saveddit
Retrieving configuration from ~/.saveddit/user_config.yaml file
No configuration file found.
Creating one. Would you like to edit it now?
> Choose Y for yes and N for no

If you choose 'yes', the program will prompt you to enter the following credentials:

  • Your imgur client ID
  • Your reddit client ID
  • Your reddit client secret
  • Your reddit username

If you choose 'no', the program will create a file that you can edit later. To edit it:

  • Open the generated ~/.saveddit/user_config.yaml
  • Update the client IDs and secrets from the previous step
  • If you plan on using the user API, add your reddit username as well
imgur_client_id: '<YOUR_IMGUR_CLIENT_ID>'
reddit_client_id: '<YOUR_REDDIT_CLIENT_ID>'
reddit_client_secret: '<YOUR_REDDIT_CLIENT_SECRET>'
reddit_username: '<YOUR_REDDIT_USERNAME>'
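The configuration file is flat YAML. saveddit itself parses it with a YAML library, but for a file this simple a stdlib-only reader illustrates the shape (the `load_user_config` helper below is a hypothetical sketch, not saveddit's actual code):

```python
import os

def load_user_config(path="~/.saveddit/user_config.yaml"):
    """Read a flat `key: 'value'` YAML file into a dict (no nesting support)."""
    config = {}
    with open(os.path.expanduser(path)) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and anything that isn't a key: value pair
            if not line or line.startswith("#") or ":" not in line:
                continue
            key, _, value = line.partition(":")
            config[key.strip()] = value.strip().strip("'\"")
    return config
```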

Download from Subreddit

foo@bar:~$ saveddit subreddit -h
Retrieving configuration from /Users/pranav/.saveddit/user_config.yaml file

usage: saveddit subreddit [-h] [-f categories [categories ...]] [-l post_limit] [--skip-comments] [--skip-meta] [--skip-videos] -o output_path subreddits [subreddits ...]

positional arguments:
  subreddits            Names of subreddits to download, e.g., AskReddit

optional arguments:
  -h, --help            show this help message and exit
  -f categories [categories ...]
                        Categories of posts to download (default: ['hot', 'new', 'rising', 'controversial', 'top', 'gilded'])
  -l post_limit         Limit the number of submissions downloaded in each category (default: None, i.e., all submissions)
  --skip-comments       When true, saveddit will not save comments to a comments.json file
  --skip-meta           When true, saveddit will not save meta to a submission.json file on submissions
  --skip-videos         When true, saveddit will not download videos (e.g., gfycat, redgifs, youtube, v.redd.it links)
  --all-comments        When true, saveddit will download all the comments in a post instead of just the top ones
  -o output_path        Directory where saveddit will save downloaded content
foo@bar:~$ saveddit subreddit pics -f hot -l 5 -o ~/Desktop
foo@bar:~$ tree -L 4 ~/Desktop/www.reddit.com
/Users/pranav/Desktop/www.reddit.com
└── r
    └── pics
        └── hot
            ├── 000_Prince_Philip_Duke_of_Edinburgh_...
            ├── 001_Day_10_of_Nobody_Noticing_the_Ap...
            ├── 002_First_edited_picture
            ├── 003_Reorganized_a_few_months_ago_and...
            └── 004_Van_Gogh_inspired_rainy_street_I...
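The directory names above follow a pattern of a zero-padded index plus a slugified, truncated title. A rough sketch of how such a path could be built (the `submission_dir` helper and its exact slug rules are assumptions, not saveddit's actual implementation):

```python
import os
import re

def submission_dir(output_path, subreddit, category, index, title, max_len=32):
    """Mirror saveddit's layout: <out>/www.reddit.com/r/<sub>/<category>/NNN_<slug>.

    The slug rules here (non-alphanumerics collapsed to '_', truncated to
    max_len) only approximate what saveddit does.
    """
    slug = re.sub(r"[^A-Za-z0-9]+", "_", title).strip("_")[:max_len]
    return os.path.join(output_path, "www.reddit.com", "r",
                        subreddit, category, f"{index:03d}_{slug}")
```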

You can download from multiple subreddits and use multiple filters:

foo@bar:~$ saveddit subreddit funny AskReddit -f hot top new rising -l 5 -o ~/Downloads/Reddit/.

The downloads from each subreddit go to a separate folder, like so:

foo@bar:~$ tree -L 3 ~/Downloads/Reddit/www.reddit.com
/Users/pranav/Downloads/Reddit/www.reddit.com
└── r
    ├── AskReddit
    │   ├── hot
    │   ├── new
    │   ├── rising
    │   └── top
    └── funny
        ├── hot
        ├── new
        ├── rising
        └── top

Download from anonymous Multireddit

To download from an anonymous multireddit, use the multireddit option and pass a number of subreddit names:

foo@bar:~$ saveddit multireddit -h
usage: saveddit multireddit [-h] [-f categories [categories ...]] [-l post_limit] [--skip-comments] [--skip-meta] [--skip-videos] -o output_path subreddits [subreddits ...]

positional arguments:
  subreddits            Names of subreddits to download, e.g., aww, pics. The downloads will be stored in <OUTPUT_PATH>/www.reddit.com/m/aww+pics/.

optional arguments:
  -h, --help            show this help message and exit
  -f categories [categories ...]
                        Categories of posts to download (default: ['hot', 'new', 'random_rising', 'rising', 'controversial', 'top', 'gilded'])
  -l post_limit         Limit the number of submissions downloaded in each category (default: None, i.e., all submissions)
  --skip-comments       When true, saveddit will not save comments to a comments.json file
  --skip-meta           When true, saveddit will not save meta to a submission.json file on submissions
  --skip-videos         When true, saveddit will not download videos (e.g., gfycat, redgifs, youtube, v.redd.it links)
  -o output_path        Directory where saveddit will save downloaded content
foo@bar:~$ saveddit multireddit EarthPorn NaturePics -f hot -l 5 -o ~/Desktop

Anonymous multireddits are saved in www.reddit.com/m/<Multireddit_names>/<category>/ like so:

tree -L 4 ~/Desktop/www.reddit.com
/Users/pranav/Desktop/www.reddit.com
└── m
    └── EarthPorn+NaturePics
        └── hot
            ├── 000_Banning_State_Park_Minnesota_OC_...
            ├── 001_Misty_forest_in_the_mountains_of...
            ├── 002_One_of_the_highlights_of_my_last...
            ├── 003__OC_Japan_Kyoto_Garden_of_the_Go...
            └── 004_Sunset_at_Mt_Rainier_National_Pa...

Download from User's page

foo@bar:~$ saveddit user -h
usage: saveddit user [-h] users [users ...] {saved,gilded,submitted,multireddits,upvoted,comments} ...

positional arguments:
  users                 Names of users to download, e.g., Poem_for_your_sprog
  {saved,gilded,submitted,multireddits,upvoted,comments}

optional arguments:
  -h, --help            show this help message and exit

Here's a usage example for downloading all comments made by Poem_for_your_sprog:

foo@bar:~$ saveddit user "Poem_for_your_sprog" comments -s top -l 5 -o ~/Desktop

Here's another example for downloading kemitche's multireddits:

foo@bar:~$ saveddit user kemitche multireddits -n reddit -f hot -l 5 -o ~/Desktop

User-specific content is downloaded to www.reddit.com/u/<Username>/... like so:

foo@bar:~$ tree ~/Desktop/www.reddit.com
/Users/pranav/Desktop/www.reddit.com
└── u
    ├── Poem_for_your_sprog
    │   ├── comments
    │   │   └── top
    │   │       ├── 000_Comment_my_name_is_Cow_and_wen_its_ni....json
    │   │       ├── 001_Comment_It_stopped_at_six_and_life....json
    │   │       ├── 002_Comment__Perhaps_I_could_listen_to_podca....json
    │   │       ├── 003_Comment__I_don_t_have_regret_for_the_thi....json
    │   │       └── 004_Comment__So_throw_off_the_chains_of_oppr....json
    │   └── user.json
    └── kemitche
        ├── m
        │   └── reddit
        │       └── hot
        │           ├── 000_When_posting_to_my_u_channel_NSF...
        │           │   ├── comments.json
        │           │   └── submission.json
        │           ├── 001_How_to_remove_popular_near_you
        │           │   ├── comments.json
        │           │   └── submission.json
        │           ├── 002__IOS_2021_13_0_Reddit_is_just_su...
        │           │   ├── comments.json
        │           │   └── submission.json
        │           ├── 003_The_Approve_User_button_should_n...
        │           │   ├── comments.json
        │           │   └── submission.json
        │           └── 004_non_moderators_unable_to_view_su...
        │               ├── comments.json
        │               └── submission.json
        └── user.json

Search and Download

saveddit supports searching subreddits and downloading the search results:

foo@bar:~$ saveddit search -h
usage: saveddit search [-h] -q query [-s sort] [-t time_filter] [--include-nsfw] [--skip-comments] [--skip-meta] [--skip-videos] -o output_path subreddits [subreddits ...]

positional arguments:
  subreddits       Names of subreddits to search, e.g., all, aww, pics

optional arguments:
  -h, --help       show this help message and exit
  -q query         Search query string
  -s sort          Sort to apply on search (default: relevance, choices: [relevance, hot, top, new, comments])
  -t time_filter   Time filter to apply on search (default: all, choices: [all, day, hour, month, week, year])
  --include-nsfw   When true, saveddit will include NSFW results in search
  --skip-comments  When true, saveddit will not save comments to a comments.json file
  --skip-meta      When true, saveddit will not save meta to a submission.json file on submissions
  --skip-videos    When true, saveddit will not download videos (e.g., gfycat, redgifs, youtube, v.redd.it links)
  -o output_path   Directory where saveddit will save downloaded content

e.g.,

foo@bar:~$ saveddit search soccer -q "Chelsea" -o ~/Desktop

The downloaded search results are stored in www.reddit.com/q/<search_query>/<subreddits>/<sort>/.

foo@bar:~$ tree -L 4 ~/Desktop/www.reddit.com/q
/Users/pranav/Desktop/www.reddit.com/q
└── Chelsea
    └── soccer
        └── relevance
            ├── 000__Official_Results_for_UEFA_Champ...
            ├── 001_Porto_0_1_Chelsea_Mason_Mount_32...
            ├── 002_Crystal_Palace_0_2_Chelsea_Chris...
            ├── 003_Post_Match_Thread_Chelsea_2_5_We...
            ├── 004_Match_Thread_Porto_vs_Chelsea_UE...
            ├── 005_Crystal_Palace_1_4_Chelsea_Chris...
            ├── 006_Porto_0_2_Chelsea_Ben_Chilwell_8...
            ├── 007_Post_Match_Thread_Porto_0_2_Chel...
            ├── 008_UCL_Quaterfinalists_are_Bayern_D...
            ├── 009__MD_Mino_Raiola_and_Haaland_s_fa...
            ├── 010_Chelsea_2_5_West_Brom_Callum_Rob...
            ├── 011_Chelsea_1_2_West_Brom_Matheus_Pe...
            ├── 012__Bild_Sport_via_Sport_Witness_Ch...
            ├── 013_Match_Thread_Chelsea_vs_West_Bro...
            ├── 014_Chelsea_1_3_West_Brom_Callum_Rob...
            ├── 015_Match_Thread_Chelsea_vs_Atletico...
            ├── 016_Stefan_Savić_Atlético_Madrid_str...
            ├── 017_Chelsea_1_0_West_Brom_Christian_...
            └── 018_Alvaro_Morata_I_ve_never_had_dep...

Supported Links:

  • Direct links to images or videos, e.g., .png, .jpg, .mp4, .gif etc.
  • Reddit galleries reddit.com/gallery/...
  • Reddit videos v.redd.it/...
  • Gfycat links gfycat.com/...
  • Redgif links redgifs.com/...
  • Imgur images imgur.com/...
  • Imgur albums imgur.com/a/... and imgur.com/gallery/...
  • Youtube links youtube.com/... and youtu.be/...
  • Sites supported by youtube-dl
  • Self posts
  • For all other cases, saveddit will simply fetch the HTML of the URL

Contributing

Contributions are welcome, have a look at the CONTRIBUTING.md document for more information.

License

The project is available under the MIT license.

saveddit's People

Contributors

bwrst, kariuki-kithinji, l3str4nge, nickolaibeloguzov, p-ranav, theoneflop


saveddit's Issues

Issue with file names

Just a small problem: the " character is illegal in Windows file names, so the script crashes when it encounters one.
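A common fix is to sanitize names before creating files, replacing the characters Windows reserves. A minimal sketch (hypothetical helper, not the project's actual code):

```python
import re

# Characters Windows forbids in file names: \ / : * ? " < > | and control chars
_WIN_ILLEGAL = re.compile(r'[\\/:*?"<>|\x00-\x1f]')

def sanitize_filename(name, replacement="_"):
    """Replace characters that are illegal in Windows file names."""
    return _WIN_ILLEGAL.sub(replacement, name)
```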

Need error handling or processing of non-media posts

Getting the following error occasionally:

     * This is a redgif link
       - Looking for submission.preview.reddit_video_preview.fallback_url
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/christopher/saveddit/saveddit/saveddit.py", line 65, in <module>
    main(args)
  File "/home/christopher/saveddit/saveddit/saveddit.py", line 31, in main
    downloader.download(args.o,
  File "/home/christopher/saveddit/saveddit/subreddit_downloader.py", line 141, in download
    self.download_gfycat_or_redgif(submission, files_dir)
  File "/home/christopher/saveddit/saveddit/subreddit_downloader.py", line 371, in download_gfycat_or_redgif
    if "reddit_video_preview" in submission.preview:
  File "/home/christopher/.local/lib/python3.8/site-packages/praw/models/reddit/base.py", line 35, in __getattr__
    return getattr(self, attribute)
  File "/home/christopher/.local/lib/python3.8/site-packages/praw/models/reddit/base.py", line 36, in __getattr__
    raise AttributeError(
AttributeError: 'Submission' object has no attribute 'preview'
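PRAW's lazy models raise AttributeError for any field the API response did not include, so `submission.preview` cannot be accessed unconditionally. A hedged sketch of a guard (the `reddit_video_fallback_url` helper is illustrative; `submission.preview`, when present, is a plain dict in PRAW):

```python
def reddit_video_fallback_url(submission):
    """Safely dig out preview.reddit_video_preview.fallback_url, or None.

    PRAW raises AttributeError for fields the API did not return, so the
    top level is accessed with getattr(..., None) instead of a direct
    attribute access.
    """
    preview = getattr(submission, "preview", None)
    if not preview or "reddit_video_preview" not in preview:
        return None
    return preview["reddit_video_preview"].get("fallback_url")
```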

After merging audio and video, the separate audio and video files stay around

When downloading a file whose audio and video come as two separate files, the individual audio and video files stay around after saveddit merges them into one. Is this intended functionality? Or could there be an option to keep only the merged file when downloading?
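If the leftover files are unwanted, one approach is to delete the source streams only after verifying the merged file exists. A sketch (hypothetical helper, assuming the merge step produced `merged_path`):

```python
import os

def cleanup_merge_sources(merged_path, source_paths):
    """Delete the separate audio/video files once the merged file exists
    and is non-empty; keep them if the merge apparently failed."""
    if not (os.path.isfile(merged_path) and os.path.getsize(merged_path) > 0):
        return False
    for p in source_paths:
        if os.path.isfile(p):
            os.remove(p)
    return True
```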

filenames for large multireddits

Hi, I've just encountered a problem. When I try to make an anonymous multireddit with about 90 subreddits in it, the generated folder name throws this error:
[Errno 36] File name too long

Is there a way to bypass this?
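One workaround is to cap the joined name and disambiguate with a short hash, so that different subreddit sets still map to different folders. A sketch (the `multireddit_dirname` helper is hypothetical, and the 64-character cap is arbitrary; most filesystems limit names to ~255 bytes):

```python
import hashlib

def multireddit_dirname(subreddits, max_len=64):
    """Join subreddit names with '+', truncating and appending a short
    hash when the joined name would exceed max_len characters."""
    name = "+".join(subreddits)
    if len(name) <= max_len:
        return name
    digest = hashlib.sha1(name.encode()).hexdigest()[:8]
    return name[: max_len - 9] + "_" + digest
```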

HTTP 401 error

Hello, I have the following error:
python -m saveddit.saveddit -r "Ebony" -f "new" -l 2000 -o "E:\E\D\saveddit\test"

[saveddit ASCII-art banner]

Downloader for Reddit
version : v1.0.0
URL : https://github.com/p-ranav/saveddit

E:\E\D\saveddit\test
Downloading from /r/Ebony/new/
Traceback (most recent call last):
  File "C:\Users\theo\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\theo\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\theo\Downloads\saveddit-master\saveddit-master\saveddit\saveddit.py", line 73, in <module>
    main(args)
  File "C:\Users\theo\Downloads\saveddit-master\saveddit-master\saveddit\saveddit.py", line 32, in main
    categories=args.f, post_limit=args.l, skip_videos=args.skip_videos, skip_meta=args.skip_meta, skip_comments=args.skip_comments)
  File "C:\Users\theo\Downloads\saveddit-master\saveddit-master\saveddit\subreddit_downloader.py", line 74, in download
    for i, submission in enumerate(category_function(limit=post_limit)):
  File "C:\Users\theo\AppData\Local\Programs\Python\Python37\lib\site-packages\praw\models\listing\generator.py", line 63, in __next__
    self._next_batch()
  File "C:\Users\theo\AppData\Local\Programs\Python\Python37\lib\site-packages\praw\models\listing\generator.py", line 73, in _next_batch
    self._listing = self._reddit.get(self.url, params=self.params)
  File "C:\Users\theo\AppData\Local\Programs\Python\Python37\lib\site-packages\praw\reddit.py", line 566, in get
    return self._objectify_request(method="GET", params=params, path=path)
  File "C:\Users\theo\AppData\Local\Programs\Python\Python37\lib\site-packages\praw\reddit.py", line 672, in _objectify_request
    path=path,
  File "C:\Users\theo\AppData\Local\Programs\Python\Python37\lib\site-packages\praw\reddit.py", line 855, in request
    json=json,
  File "C:\Users\theo\AppData\Local\Programs\Python\Python37\lib\site-packages\prawcore\sessions.py", line 331, in request
    url=url,
  File "C:\Users\theo\AppData\Local\Programs\Python\Python37\lib\site-packages\prawcore\sessions.py", line 257, in _request_with_retries
    url,
  File "C:\Users\theo\AppData\Local\Programs\Python\Python37\lib\site-packages\prawcore\sessions.py", line 164, in _do_retry
    retry_strategy_state=retry_strategy_state.consume_available_retry(),  # noqa: E501
  File "C:\Users\theo\AppData\Local\Programs\Python\Python37\lib\site-packages\prawcore\sessions.py", line 257, in _request_with_retries
    url,
  File "C:\Users\theo\AppData\Local\Programs\Python\Python37\lib\site-packages\prawcore\sessions.py", line 164, in _do_retry
    retry_strategy_state=retry_strategy_state.consume_available_retry(),  # noqa: E501
  File "C:\Users\theo\AppData\Local\Programs\Python\Python37\lib\site-packages\prawcore\sessions.py", line 260, in _request_with_retries
    raise self.STATUS_EXCEPTIONS[response.status_code](response)
prawcore.exceptions.InvalidToken: received 401 HTTP response

This happened the first time at the 514th file, and happened again when I retried.

Make saveddit a command-line-callable module

Main issue:
Using the python3 -m saveddit.saveddit [args] command is not very convenient, for multiple reasons.

Reasons:

  • You need to be in the same directory as saveddit, which is an unnecessary step that can be eliminated
  • You need to call the module directly, which (a) can be confusing and (b) can be eliminated

Solution:
Make this module callable from anywhere by creating a setup.py script and assembling it into a Python package. That way saveddit becomes available for download via PyPI - the largest Python package repository - with a simple pip install saveddit. To use the package, you just execute the saveddit [args] command, without changing your working directory. Users can also update the package easily, and you can modify its contents with ease.

Add support for the XDG Base Directory Specification

This is a feature request for supporting the XDG Base Directory Specification.

The specification works around a bug from an early rewrite of Unix, which caused files whose names begin with a '.' to be omitted from the output of ls.
While this "bug" has become a feature for some, it has also become a headache for users, because developers continue to assume HOME is a great place to dump configuration files and local caches.

To address these issues, XDG Basedir was created to give developers a standard location for these files and to give users control over where they are placed within their HOME.

If you were to support the XDG specification the following locations would change:

Change ~/.saveddit/ to $XDG_CONFIG_HOME/saveddit and fall back to $HOME/.config/saveddit if XDG_CONFIG_HOME is not defined.
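The lookup described above can be sketched in a few lines (assuming saveddit's config moves under a `saveddit` subdirectory as proposed):

```python
import os

def saveddit_config_dir():
    """Resolve the config directory per the XDG Base Directory spec:
    $XDG_CONFIG_HOME/saveddit, falling back to ~/.config/saveddit when
    XDG_CONFIG_HOME is unset or empty."""
    base = os.environ.get("XDG_CONFIG_HOME")
    if not base:
        base = os.path.join(os.path.expanduser("~"), ".config")
    return os.path.join(base, "saveddit")
```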

FileNotFoundError when downloading a post whose title is truncated on Windows

System:
Windows 10 64 bit.
Python 3.9.5

Steps to reproduce:
Run this on Windows: saveddit subreddit pics -f top -l 5 -o .

As of today, the top post is this: https://old.reddit.com/r/pics/comments/haucpf/ive_found_a_few_funny_memories_during_lockdown/
Trying to download it gives this output:

#000 "I’ve found a few funny memories during lockdown. This is from my 1st tour in 89, backstage in Vegas."
     * Processing `https://i.redd.it/f58v4g8mwh551.jpg`
Traceback (most recent call last):
  File "c:\users\bad_g\appdata\local\programs\python\python39\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "c:\users\bad_g\appdata\local\programs\python\python39\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\bad_g\AppData\Local\Programs\Python\Python39\Scripts\saveddit.exe\__main__.py", line 7, in <module>
  File "c:\users\bad_g\appdata\local\programs\python\python39\lib\site-packages\saveddit\saveddit.py", line 346, in main
    downloader.download(args.o,
  File "c:\users\bad_g\appdata\local\programs\python\python39\lib\site-packages\saveddit\subreddit_downloader.py", line 67, in download
    SubmissionDownloader(submission, i, self.logger, category_dir,
  File "c:\users\bad_g\appdata\local\programs\python\python39\lib\site-packages\saveddit\submission_downloader.py", line 68, in __init__
    files_dir = create_files_dir(submission_dir)
  File "c:\users\bad_g\appdata\local\programs\python\python39\lib\site-packages\saveddit\submission_downloader.py", line 62, in create_files_dir
    os.makedirs(files_dir)
  File "c:\users\bad_g\appdata\local\programs\python\python39\lib\os.py", line 225, in makedirs
    mkdir(name, mode)
FileNotFoundError: [WinError 3] The system cannot find the path specified: '.\\www.reddit.com\\r\\pics\\top\\000_I_ve_found_a_few_funny_memories_...\\files'
PS C:\Users\bad_g\Downloads\Saveddit>

This is probably because Windows automatically removes the trailing ellipsis from the directory name. Maybe add an option to disable the truncation, and/or simply drop the "..." appended to the directory name on Windows.
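Since Windows silently strips trailing dots and spaces when creating a directory, the path saveddit creates and the path it later appends `files` to can disagree. Stripping those characters up front sidesteps the mismatch; a sketch (hypothetical helper, not saveddit's actual code):

```python
def windows_safe_dirname(name):
    """Strip trailing dots and spaces, which Windows silently removes
    from directory names, so the name used at creation time matches the
    name used when building child paths later."""
    return name.rstrip(". ")
```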

Move client IDs and secrets in a separate configuration file

Main issue:
In your script files you store your client IDs and secrets as constants. This can pose a number of problems.

Main problems:

  • Sensitive data exposure. Client IDs and secrets are considered rather sensitive data. Storing them as constants is highly discouraged, as basically anyone can get hold of them.
  • Difficult configuration. If you need to change/update your tokens, this can be complicated for the end user, who needs to change them in the script files themselves, which is rather discouraging.
  • Code redundancy. You define your credentials twice, making them harder to change (you need to go to every file and manually update them, which is inefficient at best), and you end up with essentially duplicate variables, which is also inefficient.

Solution:
Move all this data into a separate configuration file (.yaml or .json) and create a function to parse it. That way you store all your data in one place and retrieve it via a simple function call, making updates much simpler; the code gets a bit more organized, and any end user feels more comfortable working with a configuration file than with the raw codebase.

If you don't mind, please assign me to this issue.

Thanks for this awesome project!

Scraping comments in order

Does this library scrape the comments of a given post in the order of their occurrence, without messing up the hierarchy? The praw library helps with scraping all the comments, but they are not in order. Please let me know whether this library can do that, and which command I should use.

I used the command below and got an error:

python3 -m bdfr download ./path/to/output --all-comments -l "https://www.reddit.com/r/germany/comments/yydfai/what_is_your_opinion_of_graffiti_all_over_walls/"

Error: No such option: --all-comments

Thank you
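For what it's worth, with PRAW one can expand all comments via `submission.comments.replace_more(limit=None)` and then walk the forest depth-first, which preserves both display order and hierarchy. A sketch of the traversal (assuming each comment exposes a `replies` list, as PRAW's comments do):

```python
def walk_comments(comments, depth=0):
    """Depth-first walk of a comment forest, yielding (depth, comment)
    pairs in display order so the reply hierarchy is preserved."""
    for comment in comments:
        yield depth, comment
        yield from walk_comments(comment.replies, depth + 1)
```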

[Help] Comments Limits & Possible no-duplicate

Hi!
I just downloaded this and was wondering how I could remove the limit on how many comments it downloads (currently it downloads top comments only).

Also, I was wondering how I could prevent it from re-downloading posts I have already downloaded.

Thanks!
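saveddit does not document a built-in skip list; one way to avoid re-downloads is to keep a ledger of already-seen submission IDs between runs. A sketch (the `seen.json` ledger file and both helper names are assumptions, not part of saveddit):

```python
import json
import os

def load_seen(path):
    """Load the set of already-downloaded submission IDs from the ledger."""
    if os.path.isfile(path):
        with open(path) as f:
            return set(json.load(f))
    return set()

def mark_seen(path, seen, submission_id):
    """Record a submission ID so later runs can skip it."""
    seen.add(submission_id)
    with open(path, "w") as f:
        json.dump(sorted(seen), f)
```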

Can't find comment karma

Traceback (most recent call last):
  File "/home/wallmenis/.local/bin/saveddit", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/wallmenis/.local/lib/python3.11/site-packages/saveddit/saveddit.py", line 360, in main
    downloader.download_user_meta(args)
  File "/home/wallmenis/.local/lib/python3.11/site-packages/saveddit/user_downloader.py", line 93, in download_user_meta
    user_dict["comment_karma"] = user.comment_karma
                                 ^^^^^^^^^^^^^^^^^^
  File "/home/wallmenis/.local/lib/python3.11/site-packages/praw/models/reddit/base.py", line 35, in __getattr__
    return getattr(self, attribute)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/wallmenis/.local/lib/python3.11/site-packages/praw/models/reddit/base.py", line 36, in __getattr__
    raise AttributeError(
AttributeError: 'Redditor' object has no attribute 'comment_karma'

after running with

saveddit user wallmenis saved gilded submitted multireddits upvoted -o .

Permission Error

I am using Linux Mint 20.1. The following error occurred.

Traceback (most recent call last):
  File "/home/mobi/.local/bin/saveddit", line 8, in <module>
    sys.exit(main())
  File "/home/mobi/.local/lib/python3.8/site-packages/saveddit/saveddit.py", line 68, in main
    downloader.download(args.o,
  File "/home/mobi/.local/lib/python3.8/site-packages/saveddit/subreddit_downloader.py", line 79, in download
    os.makedirs(category_dir)
  File "/usr/lib/python3.8/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/usr/lib/python3.8/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/usr/lib/python3.8/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  [Previous line repeated 2 more times]
  File "/usr/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/Downloads'
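The path in the error is '/Downloads' at the filesystem root, which suggests the `~` in the output path was never expanded. Expanding user paths before creating directories avoids this; a sketch (hypothetical helper, not saveddit's actual code):

```python
import os

def ensure_output_dir(path):
    """Expand '~' and make the path absolute before creating it, so a
    path like '~/Downloads' never degrades into '/Downloads'."""
    full = os.path.abspath(os.path.expanduser(path))
    os.makedirs(full, exist_ok=True)
    return full
```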
