
areed1192 / sigma_coding_youtube


This is a collection of all the code that can be found on my YouTube channel Sigma Coding.

Home Page: https://sigma-coding.com/

License: MIT License

Python 2.13% Jupyter Notebook 46.85% TSQL 0.51% TypeScript 0.67% Shell 0.01% VBA 5.13% Visual Basic .NET 0.59% HTML 42.85% Batchfile 0.01% VBScript 0.14% TeX 1.12%
vba-excel powerpoint-vba word-vba vba python google-maps-api yelp-fusion-api outlook-vba python-windows win32

sigma_coding_youtube's Issues

Endpoint for Positions

On the TDAmeritrade developer page documentation it's referred to as the accounts endpoint. I can't seem to find where to access it in your library. Any help is appreciated. Thanks!
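
For anyone else looking, a minimal sketch of calling that endpoint directly with requests, outside the library. The URL and the 'securitiesAccount' key follow the TD Ameritrade REST documentation; ACCOUNT_ID and ACCESS_TOKEN are placeholders for your own values.

import requests

# Placeholders - substitute your own account id and a valid access token.
ACCOUNT_ID = "123456789"
ACCESS_TOKEN = "<your access token>"

endpoint = "https://api.tdameritrade.com/v1/accounts/{}".format(ACCOUNT_ID)
headers = {'Authorization': "Bearer {}".format(ACCESS_TOKEN)}

# The fields=positions query parameter asks the endpoint to include positions.
response = requests.get(endpoint, headers=headers, params={'fields': 'positions'})
positions = response.json()['securitiesAccount']['positions']
print(positions)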

403 Forbidden

Describe the bug

Need a User-Agent, as explained in jadchaar/sec-edgar-downloader#77.

headers = {"User-Agent": "Company Name [email protected]"}
response = requests.get(TEXT_URL, headers=headers)

if response.status_code == 200:
    content_html = response.content.decode("utf-8") 
else:
    print(f"HTML from {TEXT_URL} failed with status {response.status_code}")

soup = BeautifulSoup(response.content, 'lxml')

Web Scraping SEC - EDGAR Queries.ipynb

Hi,

This part of the code triggers an error: IndexError: list index out of range

  • Web Scraping SEC - EDGAR Queries.ipynb
  • Section Two: Parse the Response for the Document Details
    - In [63]:

filing_date = cols[3].text.strip()
filing_numb = cols[4].text.strip()

Does this happen for anyone else as well?

Thanks, and amazing job!
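
For anyone hitting the same thing, a minimal sketch of one guard: skip rows that do not have the expected number of columns, assuming cols comes from row.find_all('td') over the results table (header and spacer rows are the usual cause of this IndexError).

from bs4 import BeautifulSoup

# A toy table standing in for the EDGAR results table: one full row, one spacer row.
html = """
<table>
  <tr><td>10-K</td><td>doc</td><td>desc</td><td>2020-02-20</td><td>001-12345</td></tr>
  <tr><td>spacer</td></tr>
</table>
"""

soup = BeautifulSoup(html, 'lxml')

for row in soup.find_all('tr'):
    cols = row.find_all('td')
    # Skip any row that lacks the five expected columns instead of indexing blindly.
    if len(cols) < 5:
        continue
    filing_date = cols[3].text.strip()
    filing_numb = cols[4].text.strip()
    print(filing_date, filing_numb)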

'NoneType' object has no attribute 'find_all'

Describe the bug


AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
     13
     14 # loop through each report in the 'myreports' tag but avoid the last one as this will cause an error.
---> 15 for report in reports.find_all('report')[:-1]:
     16
     17     # let's create a dictionary to store all the different parts we need.

AttributeError: 'NoneType' object has no attribute 'find_all'
Expected behavior
Returns the report dictionary.

Side Note
Also, generally, when I run the scraper in Jupyter Notebook it is very buggy, and I have to run the "Grab the Filing XML Summary" block multiple times. Do you think this could be due to the SEC throttling our requests?
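
For what it's worth, the usual cause is the response coming back without the expected XML (the SEC throttles or blocks clients that send no User-Agent), so soup.find('myreports') returns None. A defensive sketch, with the summary URL and the header contents as placeholders:

import requests
from bs4 import BeautifulSoup

xml_summary = "https://www.sec.gov/Archives/edgar/data/106040/000010604020000024/FilingSummary.xml"

# Declare a contact in the User-Agent so the SEC does not reject the request.
headers = {"User-Agent": "Company Name admin@example.com"}
response = requests.get(xml_summary, headers=headers)
soup = BeautifulSoup(response.content, 'lxml')

# Fail loudly if the tag is missing instead of letting reports stay None.
reports = soup.find('myreports')
if reports is None:
    raise RuntimeError("myreports tag not found (status {}); likely throttled, retry after a pause.".format(response.status_code))

# Loop through each report but avoid the last one, as in the original notebook.
for report in reports.find_all('report')[:-1]:
    print(report.shortname.text)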

Error in SEC Scraper

Describe the bug
I encounter an error when grabbing the Filing XML Summary (referred to as the "Second Block").

To Reproduce
Steps to reproduce the behavior:

  1. First Block

# import our libraries
import requests
import pandas as pd
from bs4 import BeautifulSoup

  2. Second Block

# define the base url needed to create the file url.
base_url = r"https://www.sec.gov"

# convert a normal url to a document url
normal_url = r"https://www.sec.gov/Archives/edgar/data/106040/000010604020000024/0000106040-20-000024.txt"
normal_url = normal_url.replace('-','').replace('.txt','/index.json')

# define a url that leads to a 10k document landing page
documents_url = r"https://www.sec.gov/Archives/edgar/data/106040/000010604020000024/index.json"

# request the url and decode it.
content = requests.get(documents_url).json()

for file in content['directory']['item']:

    # Grab the filing summary and create a new url leading to the file so we can download it.
    if file['name'] == 'FilingSummary.xml':

        xml_summary = base_url + content['directory']['name'] + "/" + file['name']

        print('-' * 100)
        print('File Name: ' + file['name'])
        print('File Path: ' + xml_summary)

  3. See error

JSONDecodeError                           Traceback (most recent call last)
<ipython-input> in <module>
     10
     11 # request the url and decode it.
---> 12 content = requests.get(documents_url).json()
     13
     14 for file in content['directory']['item']:

C:\ProgramData\Miniconda2\envs\tensorflow\lib\site-packages\requests\models.py in json(self, **kwargs)
    898             # used.
    899             pass
--> 900         return complexjson.loads(self.text, **kwargs)
    901
    902     @property

C:\ProgramData\Miniconda2\envs\tensorflow\lib\json\__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    352         parse_int is None and parse_float is None and
    353         parse_constant is None and object_pairs_hook is None and not kw):
--> 354         return _default_decoder.decode(s)
    355     if cls is None:
    356         cls = JSONDecoder

C:\ProgramData\Miniconda2\envs\tensorflow\lib\json\decoder.py in decode(self, s, _w)
    337
    338         """
--> 339         obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    340         end = _w(s, end).end()
    341         if end != len(s):

C:\ProgramData\Miniconda2\envs\tensorflow\lib\json\decoder.py in raw_decode(self, s, idx)
    355             obj, end = self.scan_once(s, idx)
    356         except StopIteration as err:
--> 357             raise JSONDecodeError("Expecting value", s, err.value) from None
    358         return obj, end

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Expected behavior

File Name: FilingSummary.xml
File Path: https://www.sec.gov/Archives/edgar/data/106040/000010604020000024/FilingSummary.xml

Screenshots
Not Applicable.

Additional context
For context, it's 50/50 whether it works. Sometimes when I run it, it successfully returns the File Name and File Path; other times I get the JSONDecodeError and have to restart the kernel and run it all again. By the way, I am a big fan. Are you working on any projects recently?
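
Since it fails only intermittently, a retry loop with a declared User-Agent usually smooths this out. A minimal sketch; the header contents are a placeholder for your own contact details:

import time
import requests

documents_url = r"https://www.sec.gov/Archives/edgar/data/106040/000010604020000024/index.json"
headers = {"User-Agent": "Company Name admin@example.com"}

# A throttled response comes back as non-JSON, which is what trips the
# JSONDecodeError, so retry a few times with exponential backoff.
for attempt in range(3):
    response = requests.get(documents_url, headers=headers)
    if response.status_code == 200:
        content = response.json()
        break
    time.sleep(2 ** attempt)
else:
    raise RuntimeError("Request failed with status {}".format(response.status_code))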

Is file_htm an XML or an HTM file?

I think the code mistakenly tries to parse an HTML file. Here are a few lines from the raw code:

# Define the file path.
file_htm = sec_directory.joinpath('fb-09302019x10q.htm').resolve()
file_cal = sec_directory.joinpath('fb-20190930_cal.xml').resolve()
file_lab = sec_directory.joinpath('fb-20190930_lab.xml').resolve()
file_def = sec_directory.joinpath('fb-20190930_def.xml').resolve()

The first file is the path to an HTML file, but I think the parser is configured for an XML file. Perhaps that is why the code gives me the full structure in the CSV file but no values!
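
If the filing is an inline XBRL document (likely for a late-2019 10-Q), the .htm file is XHTML with the facts embedded in ix: tags, so an XML-oriented pass can walk the structure without ever reading the values. A quick check, assuming the file sits in the working directory:

from pathlib import Path
from bs4 import BeautifulSoup

file_htm = Path('fb-09302019x10q.htm')
soup = BeautifulSoup(file_htm.read_text(encoding='utf-8'), 'html.parser')

# Inline XBRL stores its facts in namespaced tags such as <ix:nonfraction>;
# counting them confirms whether the values live in the HTML file itself.
facts = [tag for tag in soup.find_all(True) if tag.name.startswith('ix:')]
print("Found {} inline XBRL fact tags".format(len(facts)))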

Not able to scrape page contexts in a loop

I need to scrape the contexts of around 250 10-K filings from 2019. When I run the code while looping through the list of 250 URLs, it works only for the first URL; for the next ones it throws AttributeErrors from the find method.
Any help would be appreciated!
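
A likely culprit is SEC throttling once the loop starts firing requests back to back: a throttled response is an error page, so find returns None and the next attribute access fails. A sketch of a paced fetch; fetch_filing is a hypothetical helper and the header contents are a placeholder:

import time
import requests
from bs4 import BeautifulSoup

headers = {"User-Agent": "Company Name admin@example.com"}

def fetch_filing(url):
    """Fetch one filing page, pausing so a 250-URL loop stays under the SEC's rate limit."""
    response = requests.get(url, headers=headers)
    response.raise_for_status()  # fail loudly instead of parsing an error page
    time.sleep(0.5)              # pace the loop between requests
    return BeautifulSoup(response.content, 'lxml')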

TD Standard API.py: issue with refresh token

Alex:

Thank you for the great YouTube tutorial about how to access TDAmeritrade accounts via the API. I have a question about some code I have been playing with.

The authentication returns 2 tokens, one that expires in 30 minutes and the other one that expires in 3 months (the refresh_token).

Your code shows how to access the functionality by using the access token. I saved my access token and refresh token in an environment file.

After 30 minutes, the access token expires. How do I access the functionality with the refresh_token?

Do I have to modify this?
headers = {'Authorization': "Bearer {}".format(access_token)}

The documentation states this about refresh_tokens:
To request a new access token, make a Post Access Token request with your refresh token using the following parameter values:

grant_type: refresh_token
refresh_token: {REFRESH TOKEN}
client_id: {Consumer Key}

Not sure how to implement this.
Thank you!
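
A minimal sketch of that Post Access Token request, using the parameter values quoted above. The endpoint URL follows the TD Ameritrade documentation; REFRESH_TOKEN and CLIENT_ID stand in for your saved values:

import requests

REFRESH_TOKEN = "<your refresh token>"
CLIENT_ID = "<your consumer key>"

# Post the refresh token to the token endpoint to get a fresh access token.
url = r"https://api.tdameritrade.com/v1/oauth2/token"
payload = {
    'grant_type': 'refresh_token',
    'refresh_token': REFRESH_TOKEN,
    'client_id': CLIENT_ID
}

response = requests.post(url, data=payload)
access_token = response.json()['access_token']

# Then rebuild the header exactly as before with the new token.
headers = {'Authorization': "Bearer {}".format(access_token)}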

Cannot run the tdameritrade code

I installed ChromeDriver 81.0.4044.69 and I am using Chrome 81.0.4044.92 (Official Build) (64-bit).
I also installed splinter 0.13.0 and added chromedriver to the environment path; I can find chromedriver in cmd mode.

But when I run the code, I get an error:

Traceback (most recent call last):
  File "C:/Users/XXX/AppData/Roaming/JetBrains/PyCharm2020.1/scratches/Test.py", line 16, in <module>
    browser = Browser('chrome', **executable_path, headless=False)
  File "C:\Users\XXX\AppData\Roaming\Python\Python36\site-packages\splinter\browser.py", line 90, in Browser
    return get_driver(driver, *args, **kwargs)
  File "C:\Users\XXX\AppData\Roaming\Python\Python36\site-packages\splinter\browser.py", line 68, in get_driver
    raise e
UnboundLocalError: local variable 'e' referenced before assignment

Do you know what I am missing?
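
That UnboundLocalError is splinter's get_driver masking whatever actually went wrong when the driver started. One way to surface the real exception is to start chromedriver through selenium directly; a sketch, assuming selenium 3 and a placeholder driver path:

from selenium import webdriver

# Starting Chrome directly raises the underlying error (bad path, version
# mismatch, etc.) instead of splinter's masked UnboundLocalError.
driver = webdriver.Chrome(executable_path=r"C:\path\to\chromedriver.exe")
driver.get("https://www.google.com")
driver.quit()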

Task exception was never retrieved for Data Streaming

Hi Alex,

I ran your code as you have written it.
When I run the code at the top, I always see the print message "Connection established. Client correctly connected".
When I run the code I might see a response for LevelOne_futures_options, Active_Nasdaq, Quote, or all of them combined, followed by:
"Connection with server closed"
"Task exception was never retrieved"
and then an error message: RuntimeError: cannot call recv while another coroutine is already waiting for the next message.
I have attached screenshots of the error messages.

Other times the code runs continuously as intended, without interruption.

Would you know what this issue can be attributed to? Could it be my internet connection, a bug on the server side, or a bug in the websockets code? And if you know a way to get around it, that would help too.

Screenshots
Screen Shot 2020-12-08 at 6 50 22 PM
Screen Shot 2020-12-08 at 6 49 17 PM
Screen Shot 2020-12-08 at 6 49 50 PM

  • OS: macOS
  • Browser: Chrome
  • Version: [e.g. 22]
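
That RuntimeError usually means two coroutines ended up awaiting recv on the same socket at once, so it is a client-side pattern rather than your connection. The common fix is to keep a single consumer task reading from the connection and dispatch messages from there; a rough sketch, assuming the websockets package and a placeholder URI:

import asyncio
import websockets

def handle(message):
    print(message)  # route to the appropriate service handler instead

async def consume(uri):
    async with websockets.connect(uri) as ws:
        # One reader owns recv; fan messages out from here rather than
        # awaiting recv in several coroutines.
        async for message in ws:
            handle(message)

asyncio.run(consume("wss://example.com/stream"))  # placeholder URI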
