
UltimaScraper's Introduction

UltimaScraper (Python 3.10.1+)


27th January 2023 Migration

You can either start the script or create the __settings__ and __user_data__ folders manually. Then:

Move the config.json file into "__settings__"
Rename the ".profiles" folder to "profiles" and move it into "__user_data__"

List of things I know that are broken:

Profile and header images aren't downloading
UI (Download Progress Bars to be exact)

Mandatory Tutorial

Read the #FAQ at the bottom of this page before submitting an issue.

Running the app locally

From the project folder, open Windows PowerShell (or a terminal) and run the commands below:

Installation commands:

Install Poetry

https://python-poetry.org/docs/

Update:

python updater.py

Start:

poetry run python start_us.py


Open and edit:

__user_data__/profiles/default/auth.json

[auth]

You have to fill in the following:

  • {"cookie":"cookie_value"}
  • {"x_bc":"x-bc_value"}
  • {"user_agent":"user-agent_value"}

Go to www.onlyfans.com and log in, open the network debugger, then check the image below for how to get the auth values listed above. Chrome is recommended for this process, as other browsers sometimes have issues producing values that will auth properly.

[Screenshots: locating the cookie, x-bc, and user-agent values in the browser's network debugger]

Your auth config should look similar to this

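For example, a sketch with placeholder values (the exact set of keys may differ between versions):

{
  "auth": {
    "cookie": "auth_id=0000000; sess=xxxxxxxxxxxxxxxxxxxx",
    "x_bc": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36",
    "active": true
  }
}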

If you get auth attempt errors, only YOU can fix it unless you're willing to let me into your account so I can see if it's working or not.

Note: If active is set to False, the script will ignore the profile.

USAGE

poetry run python start_us.py

Enter inputs as prompted by the console.

OPTIONAL

Open:

config.json (open with a text editor)

[settings]

profile_directories:

Where your account information is stored (auth.json).

Default = ["__user_data__/profiles"]

If you fill this in, remember to use forward slashes ("/") only.

download_directories:

Where downloaded content is stored.

Default = ["__user_data__/sites"]

If you fill this in, remember to use forward slashes ("/") only.

You can add multiple directories, and the script will automatically roll over to the next directory when the current one is full, as shown below.
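
For example, a hypothetical two-drive setup (the paths here are made up):

"download_directories": ["D:/__user_data__/sites", "E:/overflow/sites"]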

metadata_directories:

Where metadata content is stored.

Default = ["__user_data__/sites"]

If you fill this in, remember to use forward slashes ("/") only.

Automatic rollover not supported yet.

path_formatting:

Overview for file_directory_format, filename_format and metadata_directory_format

{site_name} = The site you're scraping.

{first_letter} = First letter of the model you're scraping.

{post_id} = The posts' ID.

{media_id} = The media's ID.

{profile_username} = Your account's username.

{model_username} = The model's username.

{api_type} = Posts, Messages, etc.

{media_type} = Images, Videos, etc.

{filename} = The media's filename.

{value} = Value of the content. Paid or Free.

{text} = The media's text.

{date} = The post's creation date.

{ext} = The media's file extension.

Don't use the {text} variable. If you do, enjoy emojis in your filepaths and the errors that come with them.
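
As a sketch, a custom format combining a few of these variables (using the example values from this page):

"file_directory_format": "{site_name}/{first_letter}/{model_username}/{api_type}/{media_type}"

would expand to something like:

OnlyFans/b/belledelphine/Posts/Images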

file_directory_format:

This puts each media file into a folder.

The values below are unique identifiers; your format must include at least one of them.

Default = "{site_name}/{model_username}/{api_type}/{value}/{media_type}"
Default Translated = "OnlyFans/belledelphine/Posts/Free/Images"

{model_username} = belledelphine

filename_format:

Usage: Format for a filename

The values below are unique identifiers; your format must include at least one of them.

Default = "{filename}.{ext}"
Default Translated = "5fb5a5e4b4ce6c47ce2b4_source.mp4"

{filename} = 5fb5a5e4b4ce6c47ce2b4_source
{media_id} = 133742069
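
For instance, a hypothetical format that prefixes the media ID (using the example values above):

"filename_format": "{media_id}_{filename}.{ext}"

would translate to:

133742069_5fb5a5e4b4ce6c47ce2b4_source.mp4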

metadata_directory_format:

Usage: Filepath for metadata. It's tied to download_directories, so ignore metadata_directories in the config.

The values below are unique identifiers; your format must include at least one of them.

Default = "{site_name}/{model_username}/Metadata"
Default Translated = "OnlyFans/belledelphine/Metadata"

{model_username} = belledelphine

text_length:

Usage: When you use {text} in filename_format, this sets a limit on the number of characters kept; input a number.

Default = ""
Ideal = "50"
Max = "255"

The ideal is actually 0.
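
If you use {text} despite the warning above, a sketch that caps it at 50 characters:

"text_length": "50",
"filename_format": "{date} - {text}.{ext}"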

video_quality:

Usage: Select the resolution of the video.

Default = "source"
720p = "720" | "720p"
240p = "240" | "240p"

auto_profile_choice:

Types: str|int

Usage: You can automatically choose which profile you want to scrape.

Default = ""

If you've got a profile folder named "user_one", set auto_profile_choice to "user_one" and it will choose it automatically.

auto_site_choice:

Types: list|str|bool

Usage: You can automatically choose which site you want to scrape.

Default = ""

Inputs: onlyfans, fansly

auto_media_choice:

Types: list|str|bool

Usage: You can automatically choose which media type you want to scrape.

Default = ""

Inputs: All, Images, Videos, etc

auto_model_choice:

Types: list|str|bool

Default = false
Inputs: All, username, etc

If set to true, the script will scrape all the names.

auto_api_choice:

Default = true

If set to false, you'll be given the option to scrape individual APIs.
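
Putting the auto_* options together, a sketch of a fully unattended configuration (the profile name is a placeholder):

"auto_profile_choice": "default",
"auto_site_choice": "onlyfans",
"auto_media_choice": "All",
"auto_model_choice": true,
"auto_api_choice": true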

jobs:

(Downloads)
"subscriptions" - This will scrape your standard content
"paid_content" - This will scrape paid content

If set to false, it won't do the job.
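
A sketch of how this might look in the config (the exact nesting may differ between versions):

"jobs": {
    "subscriptions": true,
    "paid_content": false
}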

export_type:

Default = "json"

JSON = "json"

You can export an archive to different formats (not anymore lol).

overwrite_files:

Default = false

If set to true, any file with the same name will be redownloaded.

date_format:

Default = "%d-%m-%Y"

If you live in the USA and you want to use the incorrect format, use the following:

"%m-%d-%Y"

max_threads:

Default = -1

When the number is set below 1, all available threads will be used.
Set a number higher than 0 to limit threads.

min_drive_space:

Default = 0
Type: Float

Space is calculated in GB: 0.5 = 500 MB, 1 = 1 GB, etc.
When a drive drops below the minimum drive space, the script will move on to the next drive, or loop until the drive is back above the minimum.
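
For example, combined with multiple download_directories, a hypothetical setup that rolls over once a drive has less than 1 GB free:

"download_directories": ["D:/__user_data__/sites", "E:/overflow/sites"],
"min_drive_space": 1.0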

webhooks:

Default = []

Supported webhooks:
Discord

Data is sent whenever you've completely downloaded a model.
You can also put in your own custom URL and parse the data yourself.
Need another webhook? Open an issue.
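
For example, a Discord webhook entry (the ID and token here are placeholders):

"webhooks": ["https://discord.com/api/webhooks/000000000000000000/XXXXXXXXXXXXXXXX"]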

exit_on_completion:

Default = false

If set to true, the scraper will run once and exit upon completion; otherwise it will give you the option to run again. This is useful when the scraper is executed by a cron job or another script.

infinite_loop:

Default = true

If set to false, the script will run once and ask you to input anything to continue.

loop_timeout:

Default = 0

When infinite_loop is set to true, this sets the time in seconds to pause between runs.
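
A sketch of a looping run that pauses five minutes between runs (assuming a numeric value, matching the default of 0):

"infinite_loop": true,
"loop_timeout": 300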

boards:

Default = []
Example = ["s", "gif"]

Input the names of the boards you want to automatically scrape.

ignored_keywords:

Default = []
Example = ["ignore", "me"]

The script will ignore any content that contains the words you input here.

ignore_type:

Default = ""
a = "paid"
b = "free"

This setting excludes paid or free accounts from your subscription list.

Example: "ignore_type": "paid"

With this choice, accounts that you've paid for will not be included.

export_metadata:

Default = true

Set to false if you don't want to save metadata.

blacklist_name:

Default = ""
Example = ["Blacklisted"]
Example = "Blacklisted,alsoforbidden"

This setting allows you to exclude usernames when you choose the "scrape all" option, either by using lists or by targeting specific usernames.

1. Go to https://onlyfans.com/my/lists and create a new list; you can name it whatever you want, but I called mine "Blacklisted". Add the list's name to the config.
   Example: "blacklist_name": "Blacklisted"

2. Or simply put the username of the content creator in the list.

Other Tutorials:

Running the app via docker

Build and run the image, mounting the appropriate directories:

docker build -t only-fans . && docker run -it --rm --name onlyfans \
  -v ${PWD}/__settings__:/usr/src/app/__settings__ \
  -v ${PWD}/__user_data__:/usr/src/app/__user_data__ \
  only-fans

Running on Linux

FAQ:

Before troubleshooting, make sure you're using Python 3.10.1 and the latest commit of the script.

Error: Access Denied / Auth Loop

Quadruple-check that the cookies and user agent are correct. Remove 2FA.

I'm getting authed into the wrong account

Enjoy the free content. | This has been patched lol.

Do OnlyFans or OnlyFans models know I'm using this script?

OnlyFans may know that you're using this script, but I try to keep it as anonymous as possible.

Generally, models will not know unless OnlyFans tells them. However, the metadata folder contains identifiable information, including your IP address, so don't share it unless you're using a proxy/VPN or just don't care.

Do you collect session information?

No. The code is on GitHub, which allows you to audit the codebase yourself. You can use Wireshark or any other network-analysis program to verify that the outgoing connections correspond to the modules you chose.

Serious Disclaimer (lmao):

OnlyFans is a registered trademark of Fenix International Limited 🤓☝️.

The contributors of this script aren't in any way affiliated with, sponsored by, or endorsed by Fenix International Limited 🤓☝️.

The contributors of this script are not responsible for the end users' actions... 🤓☝️.

UltimaScraper's People

Contributors

aboredpervert, adamvorobyov, americanseeder1865, andonandon, anotherofuser, banillasolt, casperdcl, cclauss, digitalcriminal, e0911cd45b19686, ecchiecchi0, helopy, jonathanunderwood62, kozobot, kr33g33, naxolotl, notarealemail, ood4rkl0rdoo, qtv, rakambda, reahari, resokou, rybackrulez, secretshell, sivra-d, sixinchfootlong, stranger-danger-zamu, throwaway-of, ultimahoarder, zymurnerd


UltimaScraper's Issues

New header image for README.MD

I made this header image for the repo, stealing designs from here and there. If you like it, you could put it at the top of README.MD.

[Image: proposed header]

Auth Error

Not sure if I've edited the JSON file correctly or not, but I keep getting this error; any help would be appreciated.
Traceback (most recent call last):
  File "onlyfans.py", line 14, in <module>
    j_directory = json_data['directory']+"/Users/"
KeyError: 'directory'

ModuleNotFoundError: No module named 'requests'

I'm quite the beginner with Python.

I have installed all the requirements and have verified that the requests module has been downloaded, but I'm still getting this error.

Traceback (most recent call last):
  File "C:\OnlyFans-master\OnlyFans.py", line 7, in <module>
    import requests
ModuleNotFoundError: No module named 'requests'

urllib.error.URLError: <urlopen error [WinError 10060]

I'm only having this issue when I try to scrape a profile with 1561 photos. I'm not using a VPN or proxy; any ideas what else I can do to fix this?

Traceback (most recent call last):
File "C:\Python37\lib\urllib\request.py", line 1317, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "C:\Python37\lib\http\client.py", line 1244, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Python37\lib\http\client.py", line 1290, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Python37\lib\http\client.py", line 1239, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Python37\lib\http\client.py", line 1026, in _send_output
self.send(msg)
File "C:\Python37\lib\http\client.py", line 966, in send
self.connect()
File "C:\Python37\lib\http\client.py", line 1406, in connect
super().connect()
File "C:\Python37\lib\http\client.py", line 938, in connect
(self.host,self.port), self.timeout, self.source_address)
File "C:\Python37\lib\socket.py", line 727, in create_connection
raise err
File "C:\Python37\lib\socket.py", line 716, in create_connection
sock.connect(sa)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "OnlyFans.py", line 170, in <module>
scrape_choice()
File "OnlyFans.py", line 86, in scrape_choice
media_scraper(image_api, location, j_directory, only_links)
File "OnlyFans.py", line 145, in media_scraper
pool.starmap(download_media, product(media_set.items(), [directory]))
File "C:\Python37\lib\multiprocessing\pool.py", line 276, in starmap
return self._map_async(func, iterable, starmapstar, chunksize).get()
File "C:\Python37\lib\multiprocessing\pool.py", line 657, in get
raise self._value
File "C:\Python37\lib\multiprocessing\pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "C:\Python37\lib\multiprocessing\pool.py", line 47, in starmapstar
return list(itertools.starmap(args[0], args[1]))
File "OnlyFans.py", line 154, in download_media
urlretrieve(link, directory)
File "C:\Python37\lib\urllib\request.py", line 247, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
File "C:\Python37\lib\urllib\request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "C:\Python37\lib\urllib\request.py", line 525, in open
response = self._open(req, data)
File "C:\Python37\lib\urllib\request.py", line 543, in _open
'_open', req)
File "C:\Python37\lib\urllib\request.py", line 503, in _call_chain
result = func(*args)
File "C:\Python37\lib\urllib\request.py", line 1360, in https_open
context=self._context, check_hostname=self._check_hostname)
File "C:\Python37\lib\urllib\request.py", line 1319, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>

TimeoutError: [WinError 10060]

I'm still getting timeouts; it did output to the error log. Do you need info from the error log?

Traceback (most recent call last):

  File "C:\Python37\lib\urllib\request.py", line 1317, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "C:\Python37\lib\http\client.py", line 1244, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "C:\Python37\lib\http\client.py", line 1290, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "C:\Python37\lib\http\client.py", line 1239, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "C:\Python37\lib\http\client.py", line 1026, in _send_output
    self.send(msg)
  File "C:\Python37\lib\http\client.py", line 966, in send
    self.connect()
  File "C:\Python37\lib\http\client.py", line 1406, in connect
    super().connect()
  File "C:\Python37\lib\http\client.py", line 938, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "C:\Python37\lib\socket.py", line 727, in create_connection
    raise err
  File "C:\Python37\lib\socket.py", line 716, in create_connection
    sock.connect(sa)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "OnlyFans.py", line 221, in <module>
    scrape_choice()
  File "OnlyFans.py", line 87, in scrape_choice
    media_scraper(image_api, location, j_directory, only_links)
  File "OnlyFans.py", line 170, in media_scraper
    pool.starmap(download_media, product(media_set.items(), [directory]))
  File "C:\Python37\lib\multiprocessing\pool.py", line 276, in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
  File "C:\Python37\lib\multiprocessing\pool.py", line 657, in get
    raise self._value
  File "C:\Python37\lib\multiprocessing\pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "C:\Python37\lib\multiprocessing\pool.py", line 47, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "OnlyFans.py", line 192, in download_media
    urlretrieve(link, directory)
  File "C:\Python37\lib\urllib\request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "C:\Python37\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Python37\lib\urllib\request.py", line 525, in open
    response = self._open(req, data)
  File "C:\Python37\lib\urllib\request.py", line 543, in _open
    '_open', req)
  File "C:\Python37\lib\urllib\request.py", line 503, in _call_chain
    result = func(*args)
  File "C:\Python37\lib\urllib\request.py", line 1360, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "C:\Python37\lib\urllib\request.py", line 1319, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>

AttributeError: 'NoneType' object has no attribute 'select'

Traceback (most recent call last):
  File "OnlyFans.py", line 142, in <module>
    user_id = link_check(input_link)
  File "OnlyFans.py", line 38, in link_check
    temp_user_id = user_list.select('a[data-user]')
AttributeError: 'NoneType' object has no attribute 'select'

urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)>

Trying to scrape images, but I'm getting the following error:

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1317, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1244, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1290, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1239, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1026, in _send_output
self.send(msg)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 966, in send
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1414, in connect
server_hostname=server_hostname)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 423, in wrap_socket
session=session
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 870, in _create
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 1139, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "onlyfans.py", line 170, in <module>
scrape_choice()
File "onlyfans.py", line 86, in scrape_choice
media_scraper(image_api, location, j_directory, only_links)
File "onlyfans.py", line 145, in media_scraper
pool.starmap(download_media, product(media_set.items(), [directory]))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 276, in starmap
return self._map_async(func, iterable, starmapstar, chunksize).get()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 657, in get
raise self._value
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 47, in starmapstar
return list(itertools.starmap(args[0], args[1]))
File "onlyfans.py", line 154, in download_media
urlretrieve(link, directory)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 247, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 543, in _open
'_open', req)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1360, in https_open
context=self._context, check_hostname=self._check_hostname)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1319, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)>

Any fixes?

ValueError: Invalid isoformat string

Hey @DIGITALCRIMINAL, great job with the filename wildcards! There's an issue that on my end happened only when downloading videos: it downloads photos just fine but when going to videos the script throws this error:

Photos Finished
Traceback (most recent call last):
  File "/OnlyFans-master/OnlyFans.py", line 172, in <module>
    scrape_choice()
  File "/OnlyFans-master/OnlyFans.py", line 83, in scrape_choice
    media_scraper(video_api, location, j_directory, only_links)
  File "/OnlyFans-master/OnlyFans.py", line 126, in media_scraper
    dt = datetime.fromisoformat(media_api["postedAt"]).replace(tzinfo=None).strftime('%d-%m-%Y')
ValueError: Invalid isoformat string: '-001-11-30T00:00:00+00:00'

I managed to make it shut up and download the precious porn by nulling dt:
dt = "prettydolphins"

but I'd love to have the real date if it's possible!

Error Scraping

I am getting the error below when scraping. It seems like it authenticated, but it can't get the details of the media. Same error whichever option is chosen (everything, images, videos), even with -l.

Scrape: a = Everything | b = Images | c = Videos
Optional Arguments: -l = Only scrape links -()- Example: "a -l"
c
Traceback (most recent call last):
  File "OnlyFans.py", line 170, in <module>
    scrape_choice()
  File "OnlyFans.py", line 91, in scrape_choice
    media_scraper(video_api, location, j_directory, only_links)
  File "OnlyFans.py", line 124, in media_scraper
    dt = datetime.fromisoformat(media_api["postedAt"]).replace(tzinfo=None).strftime('%d-%m-%Y')
AttributeError: type object 'datetime.datetime' has no attribute 'fromisoformat'

object is not iterable

root@ip:~/OnlyFans# python3 StartDatascraper.py
Auth (V1) Attempt 1/10
Access denied.
Auth (V1) Attempt 2/10
Access denied.
Auth (V1) Attempt 3/10
Welcome u****
Some OnlyFans' video links have SLOW download (Blame OF). I suggest importing the metadata json content to a Download Manager like IDM or JDownloader, or you could be waiting for 1HR+ for 300 videos to be finished.
Names: 0 = All | 1 = someonesusername
1
Invalid Choice
Traceback (most recent call last):
  File "StartDatascraper.py", line 92, in <module>
    session[0], username, site_name, app_token)
  File "/root/OnlyFans/modules/onlyfans.py", line 51, in start_datascraper
    for item in array:
TypeError: 'bool' object is not iterable
root@ip:~/OnlyFans# python3 --version
Python 3.6.8
root@ip:~/OnlyFans# pip3 --version
pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6)
root@ip:~/OnlyFans#

Date format

Any way to change the date format to Y/M/D?

Getting Access denied

I have filled out the requirements inside settings.json, and I am still currently subscribed to the user in question. I'm also using the latest commit fa94513.

$ python OnlyFans.py
Input a username or profile link
https://onlyfans.com/yung_angel66
Access denied.
First time? Did you forget to edit your settings.json file?
Input a username or profile link

let me know how I can help you troubleshoot!

Script Isn't Accessing My OnlyFans Account

When loading up the script, the program refers to me by a different username than my own and I can't download anything from the accounts I'm subscribed to. I've double checked my config file and my app-token, auth_id, and auth_hash match the values shown in my browser.

[Screenshot: wrong username shown]

{DamnSon is not my username, it's an existing account that belongs to someone else}

Can't get tokens

Hello, I tried following your method but I can't find anything.
What network debugger do you use? (Tried Firefox and Chrome.)
Is there anything special to know?
Thank you.

TypeError: argument of type 'NoneType' is not iterable

I'm encountering the following with iamsweette:

Traceback (most recent call last):
  File "C:\Users\USER\Desktop\OnlyFans-master\Start Datascraper.py", line 20, in <module>
    result = start_datascraper(session, app_token, username)
  File "C:\Users\USER\Desktop\OnlyFans-master\modules\onlyfans.py", line 47, in start_datascraper
    response = media_scraper(session, *item[1])
  File "C:\Users\USER\Desktop\OnlyFans-master\modules\onlyfans.py", line 163, in media_scraper
    media_set = pool.starmap(scrape_array, product(offset_array, [session]))
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\multiprocessing\pool.py", line 276, in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\multiprocessing\pool.py", line 657, in get
    raise self._value
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\multiprocessing\pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\multiprocessing\pool.py", line 47, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "C:\Users\USER\Desktop\OnlyFans-master\modules\onlyfans.py", line 135, in scrape_array
    if "ca2.convert" in file:
TypeError: argument of type 'NoneType' is not iterable

Not grabbing all OF posts

Hi, thanks for this script! I ran it earlier but it looks like it isn't scraping all the content for a given user. The profile shows 585 photos and 157 videos, but the script only downloaded 133 photos and 100 videos. Any idea what the issue could be?

Access Denied

Just updated to the latest version and updated config.json. I'm getting this:

Site: 0 = onlyfans | 1 = justforfans
0
Access denied.

Can't install requirements

This is what I get when I try to install requirements in Linux:

Requirement already satisfied: requests in /media/sdw1/holonet81/.local/lib/python2.7/site-packages (from -r requirements.txt (line 1)) (2.22.0)
Collecting beautifulsoup4 (from -r requirements.txt (line 2))
Using cached https://files.pythonhosted.org/packages/f9/d9/183705a87492249b212d88eef740995f55076195bcf45ed59306c146e42d/beautifulsoup4-4.8.1-py2-none-any.whl
Requirement already satisfied: urllib3 in /media/sdw1/holonet81/.local/lib/python2.7/site-packages (from -r requirements.txt (line 3)) (1.25.6)
Collecting win32-setctime (from -r requirements.txt (line 4))
ERROR: Could not find a version that satisfies the requirement win32-setctime (from -r requirements.txt (line 4)) (from versions: none)
ERROR: No matching distribution found for win32-setctime (from -r requirements.txt (line 4))

What am I doing wrong here?

AttributeError: 'NoneType' object has no attribute 'find'

When searching for someone, it will fail and output the following:

Traceback (most recent call last):
  File "OnlyFans.py", line 109, in <module>
    user_id = link_check(input_link)
  File "OnlyFans.py", line 38, in link_check
    temp_user_id = html.find("div", {"class": "b-users"}).find("a", attrs={"data-user", True})
AttributeError: 'NoneType' object has no attribute 'find'

Settings.json filled, still not continuing

Hi,

I've filled out everything inside settings.json and when I run the script, it repeatedly asks for a username or profile link.

$ python OnlyFans.py
Input a username or profile link
https://onlyfans.com/yung_angel66
No users found
First time? Did you forget to edit your settings.json file?
Input a username or profile link
https://onlyfans.com/yung_angel66
No users found
First time? Did you forget to edit your settings.json file?
Input a username or profile link

Saying all videos are downloaded when they're not

When I ran the script and let it run, it took around 1h 30min to get ~350 videos from an OnlyFans account, then it finished and said it had grabbed them all. There are roughly 2600 videos on said OnlyFans account. Shall I just re-run the script with "overwrite_files" set to false and see if it grabs more videos, and keep doing that until all have been downloaded?

Dates don't work on macOS

win32-setctime doesn't run on macOS, so you have to remove it from the script; you just end up with randomly named videos.

Error in scraping videos

Scrape: a = Everything | b = Images | c = Videos
Optional Arguments: -l = Only scrape links -()- Example: "a -l"
c -l
Traceback (most recent call last):
  File "onlyfans.py", line 170, in <module>
    scrape_choice()
  File "onlyfans.py", line 91, in scrape_choice
    media_scraper(video_api, location, j_directory, only_links)
  File "onlyfans.py", line 117, in media_scraper
    for media in media_api["media"]:
TypeError: string indices must be integers

Question

I'm probably doing something wrong on my end, but I'm using Linux and this is the error I'm getting: ./StartDatascraper.py: line 2: syntax error near unexpected token `(' ./StartDatascraper.py: line 2: `path = os.path.dirname(os.path.realpath(__file__))'

When I run the script it gives me a cross cursor, and then it appears to take a screenshot, if that helps. I believe I followed the instructions correctly, but perhaps I'm just inputting something wrong somewhere.

Ignore Existing Files

It looks like pre-existing media is overwritten instead of ignored. Is it possible to add that as an option when running the script?

Filename suggestion

We all know what media downloaded from OnlyFans looks like and it's not pretty… It would be great to add some templating like in youtube-dl in settings.json or even hardcoded, for example:

Users/{username}/{date} - {text} - {carousel-index}.{ext}

Which will become
Users/miamalkova/2019-08-08 - Pink hair... don't care!.mp4

I see in the API requests we have those:

postedAt: "2019-08-08T12:54:09+00:00",
text: "Pink hair... don't care!"

I think having the date in the filename would work wonder for consuming the downloaded media as it mirrors the profile order. The text could even not be in the filename but in a separate .json/.xml file for archival purposes.

AttributeError: type object 'datetime.datetime' has no attribute 'fromisoformat'

Stack trace below, Python 3.7.1 on Windows 64bit.

Scraping Images. Should take less than a minute.
Traceback (most recent call last):
  File "start.py", line 55, in <module>
    result = x.start_datascraper(session, username, app_token)
  File "D:\OFRip\modules\onlyfans.py", line 49, in start_datascraper
    response = media_scraper(session, *item[1])
  File "D:\OFRip\modules\onlyfans.py", line 168, in media_scraper
    media_set = pool.starmap(scrape_array, product(offset_array, [session]))
  File "C:\Users\halo6\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\pool.py", line 268, in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
  File "C:\Users\halo6\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\pool.py", line 608, in get
    raise self._value
  File "C:\Users\halo6\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "C:\Users\halo6\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\pool.py", line 47, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "D:\OFRip\modules\onlyfans.py", line 148, in scrape_array
    dt = datetime.fromisoformat(media_api["postedAt"]).replace(tzinfo=None).strftime(
AttributeError: type object 'datetime.datetime' has no attribute 'fromisoformat'

Add ability to quit/stop the downloads using keyboard

Currently there is no way to stop a download short of killing the terminal the application is running in. Could you please add an option to stop the downloads and quit the application using the keyboard? e.g. "Ctrl + C"

BUG: User Not Found

settings.json has been properly populated. After installing requirements and running python OnlyFans.py and then inputting a username, the return is User Not Found. I've tried running with username, profile link, userid from https://onlyfans.com/api2/v2/users/ and the preceding link with the userid. All return the same.

[macOS] win32_setctime: This function is only available for the Windows platform.

Hey dude! Great work on the last commits; the scraper's performance has become amazing.
Your new dependency win32_setctime obviously works only on Windows; on macOS it halts the script.

  File "/usr/local/lib/python3.7/site-packages/win32_setctime.py", line 44, in setctime
    raise OSError("This function is only available for the Windows platform.")
OSError: This function is only available for the Windows platform.

I commented line 206 to make it work.

        # setctime(directory, timestamp)

Script is not running anymore

Hi, I'm getting this error

Traceback (most recent call last):
  File "OnlyFans.py", line 142, in <module>
    user_id = link_check(input_link)
  File "OnlyFans.py", line 38, in link_check
    temp_user_id = user_list.select('a[data-user]')
AttributeError: 'NoneType' object has no attribute 'select'

Cannot load links.json in excel anymore

I'm getting this error in Excel when I try to convert the JSON file to a table:

Expression.Error: We cannot convert a value of type List to type Record.
Details:
Value=[List]
Type=[Type]

Access denied

Hi, now I've got Python running, but when I choose 0 for OnlyFans it always says that access is denied. I checked the hash and all other parameters, and they're all right. Any suggestions?
Thank you.

Getting "Access Denied" errors

Tried multiple accounts and got the auth creds, but I'm still getting Access Denied; I wonder if something changed in the login process.

UnicodeEncodeError: 'latin-1' codec can't encode character '\u20ac'

Traceback (most recent call last):
  File "OnlyFans.py", line 231, in <module>
    user_id = link_check()
  File "OnlyFans.py", line 58, in link_check
    r = session.get(link)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 546, in get
    return self.request('GET', url, **kwargs)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\adapters.py", line 449, in send
    timeout=timeout
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\site-packages\urllib3\connectionpool.py", line 603, in urlopen
    chunked=chunked)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\site-packages\urllib3\connectionpool.py", line 355, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\http\client.py", line 1244, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\http\client.py", line 1285, in _send_request
    self.putheader(hdr, value)
  File "D:\Users\<USERNAME>\AppData\Local\Programs\Python\Python37-32\lib\http\client.py", line 1217, in putheader
    values[i] = one_value.encode('latin-1')
UnicodeEncodeError: 'latin-1' codec can't encode character '\u20ac' in position 31: ordinal not in range(256)

This is the issue I encounter when I try to run it as-is.

OSError with {text} file_name_format

I'm running into the following error with a certain file when I use the {text} option in file_name_format:

OSError: [Errno 22] Invalid argument: 'X:\OnlyFans/Users/boobzillaxxx/Images/2018-02-23-Waistraining. HIIT cardio Get access to my unseen and exclusive content at onlyfans.com onlyfans.com onlyfans.com onlyfans.com onlyfans.com onlyfans.com onlyfans.com onlyfans.com onlyfans.com onlyfans.com-33881666--upload-467305-1519404725819-JPEG-20180222-092806-349656419.jpg'

Not an issue

Do you think you will write a script to pull media from justfor.fans accounts?

CSV/JSON files

Is it possible to remove any text in angle brackets, as well as Unicode text, from the "text" column when scraping to a JSON/CSV file?

Currently, I open the JSON file in Word and use the wildcard search function to delete any text starting and ending with angle brackets, replace \n with a space, etc. Then I convert it to a spreadsheet and make the relevant changes before creating a batch file to rename the files with relevant titles.

Basically I have to do a lot before I end up with something like this

"ren "5d9ec0518452aaf05bd2d.mp4" "[2019-10-11] 194 - title.mp4""
