
get_all_tickers's People

Contributors

djn1, rikbrown, shilewenuw


get_all_tickers's Issues

History tickers

Does the package retrieve tickers for stocks that are no longer active (i.e. they no longer exist or have been delisted from the exchange)?

New NASDAQ fix is not on PyPI

The current code on PyPI is version 1.7, from August 2020.
It does not work, as it uses the obsolete NASDAQ API.
See PR #17 and issues #11 and #12.
Does anyone intend to publish the fix to PyPI?

Request to be a maintainer

I can see the library is broken due to some outdated links.
Pull requests are not an efficient way to develop fixes.
If possible, please transfer ownership of the repository or grant me write access.

get_tickers() throws a parse error message

Simply run the example code and you get the parse error message below.

from get_all_tickers import get_tickers as gt
from get_all_tickers.get_tickers import Region

# tickers of all exchanges

tickers = gt.get_tickers()
print(tickers[:5])


ParserError Traceback (most recent call last)
in ()
2 from get_all_tickers.get_tickers import Region
3 # tickers of all exchanges
----> 4 tickers = gt.get_tickers()
5 print(tickers[:5])

/usr/local/lib/python3.7/dist-packages/pandas/io/parsers.py in read(self, nrows)
2155 def read(self, nrows=None):
2156 try:
-> 2157 data = self._reader.read(nrows)
2158 except StopIteration:
2159 if self._first_chunk:

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()

ParserError: Error tokenizing data. C error: Expected 1 fields in line 6, saw 47
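For context on the failure: the library still points at the retired `old.nasdaq.com` CSV endpoint, which now returns an HTML page that pandas cannot tokenize. A minimal stdlib sketch of the approach taken by the newer fix (PR #17), querying the JSON screener API instead — the parameter names and the need for a browser-like User-Agent are assumptions based on the community fix, not verified against the current API:

```python
import json
import urllib.request
from urllib.parse import urlencode

# assumed: Nasdaq rejects requests without a browser-like User-Agent
HEADERS = {"User-Agent": "Mozilla/5.0", "Accept": "application/json"}

def screener_url(exchange):
    # parameter names are assumptions based on the community fix
    params = {"tableonly": "true", "limit": "25", "offset": "0",
              "exchange": exchange, "download": "true"}
    return "https://api.nasdaq.com/api/screener/stocks?" + urlencode(params)

def fetch_symbols(exchange):
    """Fetch ticker symbols for one exchange from the JSON screener API."""
    req = urllib.request.Request(screener_url(exchange), headers=HEADERS)
    with urllib.request.urlopen(req, timeout=10) as resp:
        payload = json.load(resp)
    return [row["symbol"] for row in payload["data"]["rows"]]
```

The JSON response carries its own structure, so no CSV tokenizing is involved and the ParserError cannot occur.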

Giving same tickers multiple times

I am using the following code to download top 500 tickers by market cap: tickers = gt.get_biggest_n_tickers(500)

It's producing the same tickers multiple times. Here's the returned list:

['MSFT', 'MSFT', 'MSFT', 'AMZN', 'AMZN', 'AMZN', 'GOOG', 'GOOG', 'GOOG', 'GOOGL', 'GOOGL', 'GOOGL', 'AAPL', 'AAPL', 'AAPL', 'HD', 'HD', 'HD', 'TMO', 'TMO', 'TMO', 'PFE', 'PFE', 'PFE', 'PEP', 'PEP', 'PEP', 'AVGO', 'AVGO', 'AVGO', 'LLY', 'LLY', 'LLY', 'MRK', 'MRK', 'MRK', 'TSM', 'TSM', 'TSM', 'ORCL', 'ORCL', 'ORCL', 'ABBV', 'ABBV', 'ABBV', 'BAC', 'BAC', 'BAC', 'SHOP', 'SHOP', 'SHOP', 'BHP', 'BHP', 'BHP', 'CVX', 'CVX', 'CVX', 'FB', 'FB', 'FB', 'ACN', 'ACN', 'ACN', 'DHR', 'DHR', 'DHR', 'QCOM', 'QCOM', 'QCOM', 'NVO', 'NVO', 'NVO', 'NEE', 'NEE', 'NEE', 'TXN', 'TXN', 'TXN', 'MCD', 'MCD', 'MCD', 'MDT', 'MDT', 'MDT', 'COST', 'COST', 'COST', 'NVDA', 'NVDA', 'NVDA', 'SAP', 'SAP', 'SAP', 'TMUS', 'TMUS', 'TMUS', 'JD', 'JD', 'JD', 'HDB', 'HDB', 'HDB', 'NFLX', 'NFLX', 'NFLX', 'BBL', 'BBL', 'BBL', 'UL', 'UL', 'UL', 'UPS', 'UPS', 'UPS', 'HON', 'HON', 'HON', 'SE', 'SE', 'SE', 'DIS', 'DIS', 'DIS', 'V', 'V', 'V', 'CMCSA', 'CMCSA', 'CMCSA', 'INTC', 'INTC', 'INTC', 'SNE', 'SNE', 'SNE', 'WFC', 'WFC', 'WFC', 'UNP', 'UNP', 'UNP', 'ASML', 'ASML', 'ASML', 'JNJ', 'JNJ', 'JNJ', 'ADBE', 'ADBE', 'ADBE', 'AMGN', 'AMGN', 'AMGN', 'BMY', 'BMY', 'BMY', 'MS', 'MS', 'MS', 'LIN', 'LIN', 'LIN', 'PDD', 'PDD', 'PDD', 'PM', 'PM', 'PM', 'LOW', 'LOW', 'LOW', 'C', 'C', 'C', 'AZN', 'AZN', 'AZN', 'MA', 'MA', 'MA', 'PYPL', 'PYPL', 'PYPL', 'BUD', 'BUD', 'BUD', 'CHTR', 'CHTR', 'CHTR', 'VZ', 'VZ', 'VZ', 'JPM', 'JPM', 'JPM', 'BA', 'BA', 'BA', 'SBUX', 'SBUX', 'SBUX', 'NKE', 'NKE', 'NKE', 'BABA', 'BABA', 'BABA', 'SNY', 'SNY', 'SNY', 'ABNB', 'ABNB', 'ABNB', 'ZM', 'ZM', 'ZM', 'ABT', 'ABT', 'ABT', 'RY', 'RY', 'RY', 'CRM', 'CRM', 'CRM', 'SQ', 'SQ', 'SQ', 'PG', 'PG', 'PG', 'TM', 'TM', 'TM', 'KO', 'KO', 'KO', 'XOM', 'XOM', 'XOM', 'NOW', 'NOW', 'NOW', 'WMT', 'WMT', 'WMT', 'UNH', 'UNH', 'UNH', 'AMD', 'AMD', 'AMD', 'UBER', 'UBER', 'UBER', 'BLK', 'BLK', 'BLK', 'HSBC', 'HSBC', 'HSBC', 'TOT', 'TOT', 'TOT', 'RTX', 'RTX', 'RTX', 'D', 'D', 'D', 'EL', 'EL', 'EL', 'URI', 'URI', 'URI', 'INCY', 'INCY', 'INCY', 'TAL', 'TAL', 'TAL', 'DE', 
'DE', 'DE', 'EQNR', 'EQNR', 'EQNR', 'ATVI', 'ATVI', 'ATVI', 'APTV', 'APTV', 'APTV', 'GWW', 'GWW', 'GWW', 'WMG', 'WMG', 'WMG', 'VALE', 'VALE', 'VALE', 'DHI', 'DHI', 'DHI', 'DELL', 'DELL', 'DELL', 'CM', 'CM', 'CM', 'ZG', 'ZG', 'ZG', 'K', 'K', 'K', 'CRWD', 'CRWD', 'CRWD', 'DD', 'DD', 'DD', 'GOLD', 'GOLD', 'GOLD', 'ABB', 'ABB', 'ABB', 'CTLT', 'CTLT', 'CTLT', 'HCA', 'HCA', 'HCA', 'LOGI', 'LOGI', 'LOGI', 'BMO', 'BMO', 'BMO', 'DG', 'DG', 'DG', 'AEP', 'AEP', 'AEP', 'ALNY', 'ALNY', 'ALNY', 'GSK', 'GSK', 'GSK', 'PEG', 'PEG', 'PEG', 'ZS', 'ZS', 'ZS', 'APH', 'APH', 'APH', 'PLTR', 'PLTR', 'PLTR', 'BCE', 'BCE', 'BCE', 'AWK', 'AWK', 'AWK', 'LFC', 'LFC', 'LFC', 'VOD', 'VOD', 'VOD', 'CNHI', 'CNHI', 'CNHI', 'LHX', 'LHX', 'LHX', 'LEN', 'LEN', 'LEN', 'IQ', 'IQ', 'IQ', 'HUBS', 'HUBS', 'HUBS', 'BX', 'BX', 'BX', 'BAX', 'BAX', 'BAX', 'SBAC', 'SBAC', 'SBAC', 'WLTW', 'WLTW', 'WLTW', 'PKX', 'PKX', 'PKX', 'MLM', 'MLM', 'MLM', 'SAN', 'SAN', 'SAN', 'HUM', 'HUM', 'HUM', 'RSG', 'RSG', 'RSG', 'MCK', 'MCK', 'MCK', 'BLL', 'BLL', 'BLL', 'PAGS', 'PAGS', 'PAGS', 'NVCR', 'NVCR', 'NVCR', 'QRVO', 'QRVO', 'QRVO', 'SNOW', 'SNOW', 'SNOW', 'RMD', 'RMD', 'RMD', 'HZNP', 'HZNP', 'HZNP', 'PLD', 'PLD', 'PLD', 'OKE', 'OKE', 'OKE', 'ES', 'ES', 'ES', 'BIO', 'BIO', 'BIO', 'AKAM', 'AKAM', 'AKAM', 'NWG', 'NWG', 'NWG', 'ETSY', 'ETSY', 'ETSY', 'SNN', 'SNN', 'SNN', 'IBM', 'IBM', 'IBM', 'ETR', 'ETR', 'ETR', 'DUK', 'DUK', 'DUK', 'FTS', 'FTS', 'FTS', 'CQP', 'CQP', 'CQP', 'CDNS', 'CDNS', 'CDNS', 'SYY', 'SYY', 'SYY', 'IP', 'IP', 'IP', 'WDC', 'WDC', 'WDC', 'MTD', 'MTD', 'MTD', 'DFS', 'DFS']

As you can see, it does return 500 entries, but the same tickers appear multiple times. Why is that happening? Am I doing something wrong?
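Until the duplication is fixed upstream, a caller-side workaround is to deduplicate while preserving order. Note this shrinks the list well below the requested n, so you may need to request more tickers and slice. A sketch:

```python
def dedupe_preserve_order(tickers):
    """Drop repeated tickers, keeping the first occurrence of each."""
    seen = set()
    unique = []
    for t in tickers:
        if t not in seen:
            seen.add(t)
            unique.append(t)
    return unique

# e.g. with the list above, where each symbol appears three times:
# dedupe_preserve_order(['MSFT', 'MSFT', 'MSFT', 'AMZN', 'AMZN', 'AMZN'])
# returns ['MSFT', 'AMZN']
```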

get_tickers() method not working in production (AWS Lambda, EC2, DigitalOcean, etc.)

I use your library to get tickers from exchanges, but unfortunately the get_tickers() method only works locally on my PC. In production (AWS Lambda, AWS EC2, DigitalOcean) it doesn't work and just hangs. Your method uses the requests library; it never gets a response from Nasdaq and keeps waiting until it times out. Can you tell me what the reason is? How can I get the method working in production? Thank you in advance.
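One likely cause (an assumption, not confirmed for this case): Nasdaq's servers silently drop requests that come from data-center IP ranges or that lack browser-like headers, and since `requests.get` has no default timeout, the call hangs indefinitely. Sending browser-like headers and an explicit timeout at least makes the failure visible quickly:

```python
import requests

# assumed header values; the goal is to avoid the default
# "python-requests/x.y" User-Agent, which some servers ignore or block
HEADERS = {
    "User-Agent": "Mozilla/5.0",
    "Accept-Language": "en-US,en;q=0.9",
}

def fetch(url, params=None):
    # (connect, read) timeouts: fail fast instead of hanging forever
    return requests.get(url, headers=HEADERS, params=params, timeout=(5, 30))
```

If the request still times out from the cloud host, the block is server-side and no client setting will help; routing through an allowed IP would be the remaining option.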

get_tickers_by_region() not working for any Region value

Hello,

I've tried all the values in the Region enum with get_tickers_by_region(). All of them throw this error:

Traceback (most recent call last):
  File "/home/master_use/PycharmProjects/gitlab_tool/main.py", line 13, in <module>
    tickers = gt.get_tickers_by_region(Region.SOUTH_AMERICA)
  File "/home/master_use/PycharmProjects/test_ticker_download/venv/lib/python3.9/site-packages/get_all_tickers/get_tickers.py", line 133, in get_tickers_by_region
    df = pd.read_csv(data, sep=",")
  File "/home/master_use/PycharmProjects/test_ticker_download/venv/lib/python3.9/site-packages/pandas/util/_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "/home/master_use/PycharmProjects/test_ticker_download/venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 586, in read_csv
    return _read(filepath_or_buffer, kwds)
  File "/home/master_use/PycharmProjects/test_ticker_download/venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 488, in _read
    return parser.read(nrows)
  File "/home/master_use/PycharmProjects/test_ticker_download/venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 1047, in read
    index, columns, col_dict = self._engine.read(nrows)
  File "/home/master_use/PycharmProjects/test_ticker_download/venv/lib/python3.9/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 223, in read
    chunks = self._reader.read_low_memory(nrows)
  File "pandas/_libs/parsers.pyx", line 801, in pandas._libs.parsers.TextReader.read_low_memory
  File "pandas/_libs/parsers.pyx", line 857, in pandas._libs.parsers.TextReader._read_rows
  File "pandas/_libs/parsers.pyx", line 843, in pandas._libs.parsers.TextReader._tokenize_rows
  File "pandas/_libs/parsers.pyx", line 1925, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 5, saw 46

get_tickers_by_region gives the same result for all regions?

Hi,

I use the following lines of code:
from get_all_tickers import get_tickers as gt
from get_all_tickers.get_tickers import Region
companies = gt.get_tickers_by_region(Region.EUROPE)

But whatever region I pass in, it always returns the same list of tickers.

Do you know what the issue is / what I'm doing wrong?

Thanks!

Parse error

ParserError Traceback (most recent call last)
in
----> 1 gt.get_tickers()

D:\Anaconda3\lib\site-packages\get_all_tickers\get_tickers.py in get_tickers(NYSE, NASDAQ, AMEX)
71 tickers_list = []
72 if NYSE:
---> 73 tickers_list.extend(__exchange2list('nyse'))
74 if NASDAQ:
75 tickers_list.extend(__exchange2list('nasdaq'))

D:\Anaconda3\lib\site-packages\get_all_tickers\get_tickers.py in __exchange2list(exchange)
136
137 def __exchange2list(exchange):
--> 138 df = __exchange2df(exchange)
139 # removes weird tickers
140 df_filtered = df[~df['Symbol'].str.contains("\.|\^")]

D:\Anaconda3\lib\site-packages\get_all_tickers\get_tickers.py in __exchange2df(exchange)
132 response = requests.get('https://old.nasdaq.com/screening/companies-by-name.aspx', headers=headers, params=params(exchange))
133 data = io.StringIO(response.text)
--> 134 df = pd.read_csv(data, sep=",")
135 return df
136

D:\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
683 )
684
--> 685 return _read(filepath_or_buffer, kwds)
686
687 parser_f.__name__ = name

D:\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
461
462 try:
--> 463 data = parser.read(nrows)
464 finally:
465 parser.close()

D:\Anaconda3\lib\site-packages\pandas\io\parsers.py in read(self, nrows)
1152 def read(self, nrows=None):
1153 nrows = _validate_integer("nrows", nrows)
-> 1154 ret = self._engine.read(nrows)
1155
1156 # May alter columns / col_dict

D:\Anaconda3\lib\site-packages\pandas\io\parsers.py in read(self, nrows)
2057 def read(self, nrows=None):
2058 try:
-> 2059 data = self._reader.read(nrows)
2060 except StopIteration:
2061 if self._first_chunk:

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.read()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_rows()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()

pandas\_libs\parsers.pyx in pandas._libs.parsers.raise_parser_error()

ParserError: Error tokenizing data. C error: Expected 1 fields in line 6, saw 48

get_tickers.py returns tickers from NASDAQ only, not other exchanges

Contrary to what is claimed in the README, get_tickers_filtered only returns tickers from NASDAQ, not from other exchanges such as NYSE.

def __exchange2df(exchange):
    # note: `exchange` is never used — `params` is passed as-is, so every
    # call returns the same data regardless of the requested exchange
    r = requests.get('https://api.nasdaq.com/api/screener/stocks', headers=headers, params=params)
    data = r.json()['data']
    df = pd.DataFrame(data['rows'], columns=data['headers'])
    return df

ParserError: Error tokenizing data. C error: Expected 1 fields in line 23, saw 46

When I try:

from get_all_tickers import get_tickers as gt

tickers = gt.get_tickers()

I get an error:


tickers = gt.get_tickers(NASDAQ=False)
---------------------------------------------------------------------------
ParserError                               Traceback (most recent call last)
c:\Users\Mislav\Documents\GitHub\stocksee\stocksee\ib_market_data.py in 
----> 36 tickers = gt.get_tickers(NASDAQ=False)

C:\ProgramData\Anaconda3\lib\site-packages\get_all_tickers\get_tickers.py in get_tickers(NYSE, NASDAQ, AMEX)
     71     tickers_list = []
     72     if NYSE:
---> 73         tickers_list.extend(__exchange2list('nyse'))
     74     if NASDAQ:
     75         tickers_list.extend(__exchange2list('nasdaq'))

C:\ProgramData\Anaconda3\lib\site-packages\get_all_tickers\get_tickers.py in __exchange2list(exchange)
    136 
    137 def __exchange2list(exchange):
--> 138     df = __exchange2df(exchange)
    139     # removes weird tickers
    140     df_filtered = df[~df['Symbol'].str.contains("\.|\^")]

C:\ProgramData\Anaconda3\lib\site-packages\get_all_tickers\get_tickers.py in __exchange2df(exchange)
    132     response = requests.get('https://old.nasdaq.com/screening/companies-by-name.aspx', headers=headers, params=params(exchange))
    133     data = io.StringIO(response.text)
--> 134     df = pd.read_csv(data, sep=",")
    135     return df
    136 

~\AppData\Roaming\Python\Python38\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
    674         )
    675 
--> 676         return _read(filepath_or_buffer, kwds)
    677 
    678     parser_f.__name__ = name

~\AppData\Roaming\Python\Python38\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
    452 
    453     try:
--> 454         data = parser.read(nrows)
    455     finally:
    456         parser.close()

~\AppData\Roaming\Python\Python38\site-packages\pandas\io\parsers.py in read(self, nrows)
   1131     def read(self, nrows=None):
   1132         nrows = _validate_integer("nrows", nrows)
-> 1133         ret = self._engine.read(nrows)
   1134 
   1135         # May alter columns / col_dict

~\AppData\Roaming\Python\Python38\site-packages\pandas\io\parsers.py in read(self, nrows)
   2035     def read(self, nrows=None):
   2036         try:
-> 2037             data = self._reader.read(nrows)
   2038         except StopIteration:
   2039             if self._first_chunk:

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.read()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_rows()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()

pandas\_libs\parsers.pyx in pandas._libs.parsers.raise_parser_error()

ParserError: Error tokenizing data. C error: Expected 1 fields in line 23, saw 46

Documentation needs updating.

Within the documentation, shouldn't tickers = get_tickers() be written as tickers = gt.get_tickers() throughout the examples?

Running get_biggest_n_tickers results in HTTPSConnectionPoolError

When I try to fetch the biggest tickers in the following way:

top_5 = gt.get_biggest_n_tickers(5)
print(top_5)

I end up receiving the following error message:

ConnectionError: HTTPSConnectionPool(host='old.nasdaq.com', port=443): Max retries exceeded with url: /screening/companies-by-name.aspx?letter=0&exchange=nyse&render=download (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f23ced55a90>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))

Any idea what this might be related to? Adding a sector doesn't seem to change anything.

TypeError: 'module' object is not callable

This is the error I'm running into when I try to run the example code.

TypeError Traceback (most recent call last)
in
----> 1 ticker = gt()

TypeError: 'module' object is not callable

Here's my code:

import pandas as pd
from get_all_tickers import get_tickers as gt
from get_all_tickers.get_tickers import Region
ticker = gt()

Things of note:
I'm running Python via Miniconda. This is the first time I've worked with packages outside the conda library.
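The cause here: `from get_all_tickers import get_tickers as gt` binds `gt` to the `get_tickers` module object, and module objects are not callable; the fix is to call a function inside it, e.g. `ticker = gt.get_tickers()`. The distinction is easy to see with any stdlib module:

```python
import math

# a module object is not callable...
print(callable(math))       # False
# ...but the functions it contains are
print(callable(math.sqrt))  # True
```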

Not returning tickers.

Great job with the library. I have been using it for the last two weeks, but for some reason yesterday it stopped working.
The example you gave is not working:

from get_all_tickers import get_tickers as gt
tickers = gt.get_tickers(NYSE=True, NASDAQ=True, AMEX=True)

Please let me know if this is just an issue on my side, or if others are having this issue as well.

get_tickers() not working properly

When running get_tickers() the length of the returned list is 19959.
When you run set() on the returned list the new length is 6653.
Also, for each of the exchanges (AMEX, NYSE, NASDAQ) the same 6653-ticker list is returned.
6653*3 = 19959 so I think that the same tickers are being repeated over and over.
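The triplication hypothesis above can be checked directly: count how often each ticker appears and confirm the count is uniform. A small sketch:

```python
from collections import Counter

def repetition_factor(tickers):
    """Return k if every ticker appears exactly k times, else None."""
    counts = set(Counter(tickers).values())
    return counts.pop() if len(counts) == 1 else None

# consistent with the reported lengths: 6653 unique tickers * 3 = 19959
assert 6653 * 3 == 19959
```

If `repetition_factor` returns 3 on the real output of get_tickers(), every exchange's list is indeed being appended three times.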
