
areed1192 / sigma_coding_youtube


This is a collection of all the code that can be found on my YouTube channel Sigma Coding.

Home Page: https://sigma-coding.com/

License: MIT License

Python 2.13% Jupyter Notebook 46.85% TSQL 0.51% TypeScript 0.67% Shell 0.01% VBA 5.13% Visual Basic .NET 0.59% HTML 42.85% Batchfile 0.01% VBScript 0.14% TeX 1.12%
vba-excel powerpoint-vba word-vba vba python google-maps-api yelp-fusion-api outlook-vba python-windows win32

sigma_coding_youtube's Introduction

Sigma Coding

Tutorials & Resources

YouTube · Facebook

Support Sigma Coding

Patreon · GitHub Sponsor · Shop Amazon

Table of Contents

Overview

Howdy! My name is Alex, and if you're like me, you enjoy the world of programming. Or maybe you were like me a few years ago and are just beginning to take your first steps into this exciting world. The GitHub repository you're currently viewing contains almost all of the code you'll find on my YouTube channel, Sigma Coding. Feel free to clone, download, or branch this repository so you can leverage the code I share on my channel.

Because I cover so many different languages on my YouTube channel, I dedicate a folder to each specific language. Right now, I cover the following languages on my channel:

This list is continuously changing, and I do my best to make tutorials engaging, exciting, and most importantly, easy to follow!

Topics

Now, I cover a lot of topics on my channel, and as much as I would like to list them all, I don't want to overload you with a bunch of information. Here is a list of some of my more popular topics:

  • Python:

    • Win32COM The Win32COM library allows us to control the VBA object model from Python.
    • TD Ameritrade API The TD Ameritrade API allows us to stream real-time quote data and execute trades from Python.
    • Interactive Brokers API The Interactive Brokers API allows us to stream real-time quote data and execute trades from Python.
    • Machine Learning I cover different machine learning models ranging from regression to classification.
    • Pythonnet Pythonnet is used to connect Python to the CLR (Common Language Runtime), which gives us access to more Windows-specific libraries.
  • VBA:

    • Access VBA In Access, we can store large amounts of data. With Access, we will see how to create tables, query existing tables, and even import and export data to and from Access.
    • Excel VBA In Excel, we do an awful lot, even working with non-standard libraries like ADODB.
    • Outlook VBA In Outlook, we work with email objects and account information.
    • PowerPoint VBA This series covers interacting with PowerPoint objects using VBA, topics like linking OLE objects and formatting slides.
    • Publisher VBA In Publisher, we explore how to create fliers and other media documents for advertising.
    • Word VBA With Word VBA, we see how to manipulate different documents and change the underlying format in them.
  • JavaScript:

    • Office API Learn how to use the new JavaScript API for Microsoft Office.
    • Excel API The Excel API focuses just on the API for Microsoft Excel and the object model associated with it.
    • Word API The Word API focuses just on the API for Microsoft Word and the object model associated with it.
  • TSQL:

    • APIs Learn how to make API requests from Microsoft SQL Server.
    • Excel Learn how to work with Excel Workbooks using T-SQL.

Resources

If you ever have a question, would like to suggest a topic, have found a mistake, or just want some input on a project, you can always email me at [email protected]. Additionally, you can find dedicated folders in the repository for resources like documentation.

Links To Other Repositories

Some of my projects are so large that they have dedicated repositories. Here is a list of my other repositories:

Support the Channel

Patreon: If you like what you see, then help support the channel and future projects by donating to my Patreon page. I'm always looking to add more content for individuals like yourself; unfortunately, some of the APIs I would like to cover require me to pay monthly fees.

Hire Me: If you have a project you think I can help you with, feel free to reach out at [email protected] or fill out the contract request form.

Disclosures: I am a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to Amazon.com. Full Disclosure: I will earn a commission if you purchase from the Shop Amazon link, more details are provided below.

sigma_coding_youtube's People

Contributors

areed1192 · jasonjcq · sfbowen4


sigma_coding_youtube's Issues

'NoneType' object has no attribute 'find_all'

Describe the bug


AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
     13
     14 # loop through each report in the 'myreports' tag but avoid the last one as this will cause an error.
---> 15 for report in reports.find_all('report')[:-1]:
     16
     17 # let's create a dictionary to store all the different parts we need.

AttributeError: 'NoneType' object has no attribute 'find_all'
Expected behavior
Returns report dictionary
Side Note
Also, generally, when I run the scraper in a Jupyter Notebook it is very buggy, and I have to run the "Grab the Filing XML Summary" block multiple times. Do you think this could be due to the SEC throttling our requests?
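A defensive check makes this failure mode explicit (a hedged sketch, not the notebook's original code; the stand-in XML string is invented for illustration). When the response is not the expected summary page, find() returns None, and calling find_all() on it fails exactly as in the traceback above:

```python
from bs4 import BeautifulSoup

# A small stand-in for the FilingSummary.xml content.
xml_text = "<MyReports><Report>R1</Report><Report>R2</Report></MyReports>"
soup = BeautifulSoup(xml_text, "html.parser")

# find() returns None when the tag is missing (e.g. the request was
# throttled and returned an error page), so guard before find_all().
reports = soup.find("myreports")
if reports is None:
    raise ValueError("No 'myreports' tag found; the response may not be the XML summary.")

names = [report.text for report in reports.find_all("report")]
```

Note that bs4's html.parser lowercases tag names, which is why the notebook searches for 'myreports' rather than 'MyReports'.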

Error in SEC Scraper

Describe the bug
Encounter an error when grabbing the Filing XML Summary (referred to as "Second Block").

To Reproduce
Steps to reproduce the behavior:

  1. First Block

# import our libraries
import requests
import pandas as pd
from bs4 import BeautifulSoup

  2. Second Block

# define the base url needed to create the file url.
base_url = r"https://www.sec.gov"

# convert a normal url to a document url
normal_url = r"https://www.sec.gov/Archives/edgar/data/106040/000010604020000024/0000106040-20-000024.txt"
normal_url = normal_url.replace('-', '').replace('.txt', '/index.json')

# define a url that leads to a 10k document landing page
documents_url = r"https://www.sec.gov/Archives/edgar/data/106040/000010604020000024/index.json"

# request the url and decode it.
content = requests.get(documents_url).json()

for file in content['directory']['item']:

    # Grab the filing summary and create a new url leading to the file so we can download it.
    if file['name'] == 'FilingSummary.xml':

        xml_summary = base_url + content['directory']['name'] + "/" + file['name']

        print('-' * 100)
        print('File Name: ' + file['name'])
        print('File Path: ' + xml_summary)

  3. See error

JSONDecodeError                           Traceback (most recent call last)
<ipython-input> in <module>
     10
     11 # request the url and decode it.
---> 12 content = requests.get(documents_url).json()
     13
     14 for file in content['directory']['item']:

C:\ProgramData\Miniconda2\envs\tensorflow\lib\site-packages\requests\models.py in json(self, **kwargs)
    898                 # used.
    899                 pass
--> 900         return complexjson.loads(self.text, **kwargs)
    901
    902     @property

C:\ProgramData\Miniconda2\envs\tensorflow\lib\json\__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    352             parse_int is None and parse_float is None and
    353             parse_constant is None and object_pairs_hook is None and not kw):
--> 354         return _default_decoder.decode(s)
    355     if cls is None:
    356         cls = JSONDecoder

C:\ProgramData\Miniconda2\envs\tensorflow\lib\json\decoder.py in decode(self, s, _w)
    337
    338         """
--> 339         obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    340         end = _w(s, end).end()
    341         if end != len(s):

C:\ProgramData\Miniconda2\envs\tensorflow\lib\json\decoder.py in raw_decode(self, s, idx)
    355             obj, end = self.scan_once(s, idx)
    356         except StopIteration as err:
--> 357             raise JSONDecodeError("Expecting value", s, err.value) from None
    358         return obj, end

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Expected behavior

File Name: FilingSummary.xml
File Path: https://www.sec.gov/Archives/edgar/data/106040/000010604020000024/FilingSummary.xml

Screenshots
Not Applicable.

Additional context
For context, it's 50/50 whether it works. Sometimes when I run it, it successfully returns the File Name and File Path; other times I get the JSONDecodeError and have to restart the kernel and run it all again. By the way, I am a big fan. Are you working on any projects recently?
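The intermittent JSONDecodeError is consistent with the SEC occasionally serving a non-JSON throttling/error page instead of index.json. One possible workaround (a sketch, not code from this repository; the retry count and delay are arbitrary) is to retry with a short backoff and only accept responses that actually decode as JSON:

```python
import json
import time

def get_json_with_retry(fetch, url, retries=3, delay=2.0):
    """Call fetch(url) (e.g. a wrapper returning requests.get(url).text)
    and retry when the body is not valid JSON, as happens when the SEC
    serves a throttling page instead of the expected index.json."""
    for attempt in range(retries):
        text = fetch(url)
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)  # back off before trying again

# Example with a fake fetcher that fails once, then succeeds.
responses = iter(["<html>Rate limited</html>", '{"directory": {"item": []}}'])
content = get_json_with_retry(lambda url: next(responses), "index.json", delay=0.0)
```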

Web Scraping SEC - EDGAR Queries.ipynb

Hi

this part of the code triggers error ==> IndexError: list index out of range

  • Web Scraping SEC - EDGAR Queries.ipynb
  • Section Two: Parse the Response for the Document Details
    -In [63]:

filing_date = cols[3].text.strip()
filing_numb = cols[4].text.strip()

does this happen for anyone else as well?

thx and amazing job!!!
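A likely cause of the IndexError (an assumption; in the notebook, cols holds the cell texts of one table row) is a row with fewer cells than the indices being accessed, such as a header or spacer row. Guarding on the row length before indexing avoids the crash; the sample rows below are invented for illustration:

```python
# Hypothetical rows standing in for parsed <td> cell texts from the
# EDGAR results table; the real code builds these with BeautifulSoup.
rows = [
    ["10-K", "Documents", "Interactive Data", "2020-01-30", "000-12345"],
    ["10-Q", "Documents"],  # a short row (e.g. a header or spacer row)
]

filings = []
for cols in rows:
    # Skip rows that don't have enough cells for the fields we index.
    if len(cols) < 5:
        continue
    filing_date = cols[3].strip()
    filing_numb = cols[4].strip()
    filings.append((filing_date, filing_numb))
```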

Not able to scrap page contexts in loop

I need to scrap contexts of around 250 10-K filings of 2019. When I run the code while looping through 250 url list, Its working for only first url for next ones it is throwing Find Attribute errors with description method.
Any help would be appreciated!!

Cannot run the tdameritrade code due to an error

I installed ChromeDriver 81.0.4044.69 and I am using Chrome 81.0.4044.92 (Official Build) (64-bit).
I also installed splinter 13.0 and added chromedriver to the environment path. I can find chromedriver in cmd mode.

But when I run the code, I got an error
Traceback (most recent call last):
File "C:/Users/XXX/AppData/Roaming/JetBrains/PyCharm2020.1/scratches/Test.py", line 16, in <module>
browser = Browser('chrome', **executable_path, headless=False)
File "C:\Users\XXX\AppData\Roaming\Python\Python36\site-packages\splinter\browser.py", line 90, in Browser
return get_driver(driver, *args, **kwargs)
File "C:\Users\XXX\AppData\Roaming\Python\Python36\site-packages\splinter\browser.py", line 68, in get_driver
raise e
UnboundLocalError: local variable 'e' referenced before assignment

Do you know what I am missing?

TD Standard API.Py : issue with refresh token

Alex:

thank you for the great tutorial on YouTube about how to access TD Ameritrade accounts via the API. I have a question about some code I have been playing with.

The authentication returns 2 tokens, one that expires in 30 minutes and the other one that expires in 3 months (the refresh_token).

Your code shows how to access the functionality by using the access token. I saved my access token and refresh token in an environment file.

After 30 minutes, the access token expires... how do I access the functionality with the refresh_token?

Do I have to modify this?
headers = {'Authorization': "Bearer {}".format(access_token)}

The documentation states this about refresh_tokens:
To request a new access token, make a Post Access Token request with your refresh token using the following parameter values:

grant_type: refresh_token
refresh_token: {REFRESH TOKEN}
client_id: {Consumer Key}

Not sure how to implement this,
Thank you
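The parameter values quoted from the documentation above translate fairly directly into a Post Access Token request. The sketch below (not code from this repository; the endpoint URL follows TD Ameritrade's OAuth docs but should be verified, and the token/key values are placeholders) builds the form-encoded payload and shows where the POST would go:

```python
# Token endpoint per TD Ameritrade's OAuth documentation (an assumption;
# verify against the current docs before relying on it).
TOKEN_URL = "https://api.tdameritrade.com/v1/oauth2/token"

def build_refresh_payload(refresh_token, client_id):
    """Form-encoded body for a Post Access Token request using a
    refresh token, mirroring the parameter values quoted above."""
    return {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
    }

payload = build_refresh_payload("MY_REFRESH_TOKEN", "MY_CONSUMER_KEY")

# To actually request a new access token (requires the requests library
# and network access):
# response = requests.post(TOKEN_URL, data=payload)
# access_token = response.json()["access_token"]
# headers = {"Authorization": "Bearer {}".format(access_token)}
```

Once the new access_token comes back, the existing Authorization header line from the question works unchanged.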

Task exception was never retrieved for Data Streaming

Hi Alex,

I ran your code as you have written it.
When I run the code at the top, I always see the print message "Connection established. Client correctly connected".
When I run the code, I might see a response for LevelOne_futures_options, Active_Nasdaq, Quote, or all of them combined, followed by the statements:
"Connection with server closed"
"Task exception was never retrieved"
This is then followed by an error message: RuntimeError: cannot call recv while another coroutine is already waiting for the next message.
I have attached screenshots of the error messages.

Other times the code works continuously, as intended, without interruption.

Would you know what this issue can be attributed to? Whether it has something to do with my internet connection, a bug on the server, or a bug in the websockets, and possibly how to get around it?

Screenshots
Screen Shot 2020-12-08 at 6 50 22 PM
Screen Shot 2020-12-08 at 6 49 17 PM
Screen Shot 2020-12-08 at 6 49 50 PM

  • OS: [ MacOs]
  • Browser [chrome]
  • Version [e.g. 22]
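The "cannot call recv while another coroutine is already waiting" RuntimeError typically means two coroutines tried to receive from the same websocket concurrently. A common remedy (a sketch with a simulated stream, not the channel's streaming code) is to have exactly one reader coroutine that fans messages out through a queue:

```python
import asyncio

async def fake_stream():
    """Stand-in for a websocket: yields a few messages, then closes."""
    for msg in ["quote:AAPL", "quote:MSFT", "levelone:/ES"]:
        yield msg

async def reader(stream, queue):
    # The ONLY coroutine that receives from the socket; everyone else
    # consumes from the queue, so recv() is never awaited concurrently.
    async for message in stream:
        await queue.put(message)
    await queue.put(None)  # sentinel: connection closed

async def consumer(queue, out):
    while (message := await queue.get()) is not None:
        out.append(message)

async def main():
    queue = asyncio.Queue()
    out = []
    await asyncio.gather(reader(fake_stream(), queue), consumer(queue, out))
    return out

messages = asyncio.run(main())
```

With a real websocket, any number of consumers can read from the queue without ever touching recv() directly, which is the restriction the error message is enforcing.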

is file_htm an xml or htm file?

I think the code mistakenly tries to parse an HTML file. Here are a few lines from the raw code:

# Define the file path.
file_htm = sec_directory.joinpath('fb-09302019x10q.htm').resolve()
file_cal = sec_directory.joinpath('fb-20190930_cal.xml').resolve()
file_lab = sec_directory.joinpath('fb-20190930_lab.xml').resolve()
file_def = sec_directory.joinpath('fb-20190930_def.xml').resolve()

The first file path is for an HTML file, but I think the parser is configured for XML files. Perhaps that is why the code gives me the full structure in the CSV file, but no values!
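One way to make the mismatch visible (a sketch; the file names come from the issue above, and the suffix-based dispatch is an assumption, not the repository's code) is to pick the BeautifulSoup parser from the file suffix, so the .htm filing and the .xml linkbases are never fed to the wrong parser:

```python
from pathlib import Path

def pick_parser(path):
    """Choose a BeautifulSoup parser feature string by file suffix:
    XBRL linkbases (.xml) want an XML parser, while the .htm filing
    should go through an HTML parser instead."""
    return "lxml-xml" if Path(path).suffix == ".xml" else "html.parser"

files = [
    "fb-09302019x10q.htm",
    "fb-20190930_cal.xml",
    "fb-20190930_lab.xml",
    "fb-20190930_def.xml",
]
parsers = {name: pick_parser(name) for name in files}
```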

Endpoint for Positions

On the TD Ameritrade developer page documentation, it's referred to as the accounts endpoint. I can't seem to find where to access it in your library. Any help is appreciated. Thanks!

403 Forbidden

Describe the bug

Need user agent as explained in jadchaar/sec-edgar-downloader#77.

import requests
from bs4 import BeautifulSoup

headers = {"User-Agent": "Company Name [email protected]"}
response = requests.get(TEXT_URL, headers=headers)

if response.status_code == 200:
    content_html = response.content.decode("utf-8")
    soup = BeautifulSoup(response.content, 'lxml')
else:
    print(f"HTML from {TEXT_URL} failed with status {response.status_code}")
