
OG-ZAF


OG-ZAF is an overlapping-generations (OG) model that allows for dynamic general equilibrium analysis of fiscal policy for South Africa. OG-ZAF is built on the OG-Core framework. The model output includes changes in macroeconomic aggregates (GDP, investment, consumption), wages, interest rates, and the stream of tax revenues over time. Regularly updated documentation of the model theory (its output and solution method) and the Python API is available at https://pslmodels.github.io/OG-Core; documentation of the specific South African calibration of the model will be available soon.

Using and contributing to OG-ZAF

  • If you are installing on a Mac computer, install XCode Tools. In Terminal: xcode-select --install
  • Download and install the appropriate Anaconda distribution of Python. Select the correct version for your platform (Windows, Intel Mac, or M1 Mac).
  • In Terminal:
    • Make sure the conda package manager is up-to-date: conda update conda.
    • Make sure the Anaconda distribution of Python is up-to-date: conda update anaconda.
  • Fork this repository and clone your fork of this repository to a directory on your computer.
  • From the terminal (or Anaconda command prompt), navigate to the directory to which you cloned this repository and run conda env create -f environment.yml. The process of creating the ogzaf-dev conda environment should not take more than five minutes.
  • Then activate the environment: conda activate ogzaf-dev
  • Then install the package: pip install -e .

Run an example of the model

  • Navigate to ./examples
  • Run the model with an example reform from terminal/command prompt by typing python run_og_zaf.py
  • You can adjust ./examples/run_og_zaf.py by modifying the model parameters specified in the dictionaries passed to the p.update_specifications() calls.
  • Model outputs will be saved in the following files:
    • ./examples/OG-ZAF_example_plots
      • This folder will contain a number of plots generated from OG-Core to help you visualize the output from your run
    • ./examples/ogzaf_example_output.csv
      • This is a summary of the percentage changes in macro variables over the first ten years and in the steady-state.
    • ./examples/OG-ZAF-Example/OUTPUT_BASELINE/model_params.pkl
      • Model parameters used in the baseline run
      • See ogcore.execute.py for items in the dictionary object in this pickle file
    • ./examples/OG-ZAF-Example/OUTPUT_BASELINE/SS/SS_vars.pkl
      • Outputs from the model steady state solution under the baseline policy
      • See ogcore.SS.py for what is in the dictionary object in this pickle file
    • ./examples/OG-ZAF-Example/OUTPUT_BASELINE/TPI/TPI_vars.pkl
      • Outputs from the model timepath solution under the baseline policy
      • See ogcore.TPI.py for what is in the dictionary object in this pickle file
    • An analogous set of files in the ./examples/OUTPUT_REFORM directory, which represent objects from the simulation of the reform policy
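As a sketch of how a percentage-change summary like ogzaf_example_output.csv can be derived from baseline and reform aggregates (the variable names and numbers below are made up for illustration):

```python
# Hypothetical macro aggregates from a baseline and a reform run
baseline = {"Y": 100.0, "C": 65.0, "I": 20.0}
reform = {"Y": 101.5, "C": 64.2, "I": 21.0}

# Percentage change of each aggregate relative to the baseline
pct_change = {
    k: 100.0 * (reform[k] - baseline[k]) / baseline[k] for k in baseline
}
```

The real script reports these changes over the first ten years and in the steady state; this only shows the arithmetic.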

Note that, depending on your machine, a full model run (solving for the full time path equilibrium for the baseline and reform policies) can take from 35 minutes to more than two hours of compute time.

If you run into errors running the example script, please open a new issue in the OG-ZAF repo with a description of the issue and any relevant tracebacks you receive.

Once the package is installed, one can adjust parameters in the OG-Core Specifications object using the Calibration class as follows:

from ogcore.parameters import Specifications
from ogzaf.calibrate import Calibration

# Create a default parameters object and a South African calibration
p = Specifications()
c = Calibration(p)

# Pull the calibrated values into a dictionary and update the parameters
updated_params = c.get_dict()
p.update_specifications({'initial_debt_ratio': updated_params['initial_debt_ratio']})

Disclaimer

The organization of this repository will be changing rapidly, but the OG-ZAF/examples/run_og_zaf.py script will be kept up to date to run with the master branch of this repo.

Core Maintainers

The core maintainers of the OG-ZAF repository are:

  • Marcelo LaFleur (GitHub handle: @SeaCelo), Senior Economist, Department of Economic and Social Affairs (DESA), United Nations
  • Richard W. Evans (GitHub handle: @rickecon), Senior Research Fellow and Director of Open Policy, Center for Growth and Opportunity at Utah State University; President, Open Research Group, Inc.
  • Jason DeBacker (GitHub handle: @jdebacker), Associate Professor, University of South Carolina; Vice President of Research, Open Research Group, Inc.

Citing OG-ZAF

OG-ZAF (Version #.#.#) [Source code], https://github.com/EAPD-DRB/OG-ZAF


og-zaf's Issues

Macro parameters updates to make

macro_params.py currently pulls macroeconomic data from several sources, including the World Bank World Development Indicators, FRED, and the UN (the UN source is currently unused due to changes in their API).

Despite this, we still have trouble finding values for $\alpha_T$ (total government transfers as a share of GDP) and $\alpha_G$ (government consumption expenditures as a share of GDP).

We should clean up this file and see if we can put values to $\alpha_T$ and $\alpha_G$ with data from one of these databases.

Source corporate yield data

The only sources of corporate yield data are paid sources, and we cannot guarantee access to them. We do have access to sovereign yield data from public sources.

The current code in macro_params.py estimates the relationship between sovereign and corporate yields using US data (from FRED) and a linear OLS model.

A recent paper estimated the relationship between sovereign and corporate yields. I propose to use their estimates as the parameter, rather than the linear estimates computed in this part of the code (which are US only).
For EMEs:
Corporate Yield = 1.41(Sy) − 0.046(Sy)^2, where Sy = sovereign nominal yield

From Table 1 (eq 3) of this paper: Corporate yields and sovereign yields

To implement this, we must first add code to get country-specific sovereign yields and then compute the corporate yield using this relationship.
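A minimal sketch of that computation (the function name is ours; the coefficients are those quoted above from Table 1, eq. 3):

```python
def corporate_yield(sovereign_yield):
    """Approximate EME corporate yield (percent) from the sovereign
    nominal yield (percent) using the quadratic relation quoted above:
    Cy = 1.41*Sy - 0.046*Sy**2."""
    return 1.41 * sovereign_yield - 0.046 * sovereign_yield ** 2
```

In the real implementation, the sovereign yield input would come from a country-specific public data source rather than being passed in by hand.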

Some South African references for OG-ZAF calibration to consider & Data automation opportunity

This is a great initiative! Just a couple of comments.

Calibration

For anyone that would like to enrich the calibrations used for SA, this paper is a useful reference to use for thinking about the SA calibration:
https://link.springer.com/chapter/10.1007/978-3-030-35754-2_7 , see Tables 1-4 for comparisons across models estimated in SA.

Specific Research Questions, eg Basic Income Grants

We have a paper on basic income grants that you could look at as well: https://onlinelibrary.wiley.com/doi/full/10.1111/saje.12363

Data Automation

One could use the public domain data from our platform Econdata.co.za using its R/Python package to automate access of some of the data concepts used.

Extra Modules

For example, industry output data is available from EconData, so users could embellish the model with new modules that extend the number of industries in the model.

Model Automation

If a research student is interested in working with us to create a web app that automates the analysis, making it easy for students to use without knowing how to code, please get in touch with us at Codera.

Compute lifetime earnings profiles

Preparing the lifetime earnings profiles for ZAF using data from the Luxembourg Income Study (LIS), available in their LISSY portal.

For South Africa, the LIS has a survey from 2017 which includes total wages as well as hours worked for each individual. Their description of the data is available in their METIS site.

The strategy is to use wages per hour as a measure of productivity and to report it by age (s) and by income group (j).

The computation generally follows the procedure described in the OG-USA calibration of lifetime earnings profiles.

  1. Download wage/hour in SxJ dimensions
  2. Normalize them by the mean of each J income group
  3. Take natural log
  4. Fill in missing values by regressing ln(productivity) on age, age^2 by each J income group

This is all done with Stata code in the LISSY interface, and manually formatted into a simple CSV with SxJ dimensions.
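In Python, steps 1-4 might look like the following sketch on synthetic data (the actual computation was done in Stata; the dimensions and the missing-data pattern here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
S, J = 80, 7                       # model ages and income groups
ages = np.arange(21, 21 + S)

# 1. Synthetic wage-per-hour "data" in S x J dimensions (stand-in for LIS)
prod = rng.lognormal(mean=2.0, sigma=0.3, size=(S, J))
prod[60:, 3] = np.nan              # pretend some old-age cells are missing

# 2. Normalize by the mean of each income group j
prod = prod / np.nanmean(prod, axis=0)

# 3. Take the natural log
ln_prod = np.log(prod)

# 4. Fill missing values by regressing ln(productivity) on age and age^2
#    within each income group, then predicting the missing cells
X = np.column_stack([np.ones(S), ages, ages ** 2])
for j in range(J):
    miss = np.isnan(ln_prod[:, j])
    if miss.any():
        beta, *_ = np.linalg.lstsq(X[~miss], ln_prod[~miss, j], rcond=None)
        ln_prod[miss, j] = X[miss] @ beta
```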

I'm attaching the results:
productivity_modelled.csv

The .do file is attached (as a zip). lissy2.do.zip

@rickecon Let me know if you think this calculation is reasonable. In the US calibration you do more to compute present value and your range looks different, so we would probably have to adjust this further.

Use UN Population forecasts in demographics.py

The current demographics.py code only takes recent data and generates its own series to compute the steady state and projections. It would be desirable to use the UN models for population, fertility, mortality, and immigration.

UN Population publishes forecasts through 2100. UN Population forecasts are now available in 1-year increments and by age throughout the forecast period: World Population Prospects 2022.

UN Data API (implemented in #4).

WB API failures

Recently, the World Bank Development Indicators API has started to yield errors. As a temporary solution, we commented out code accessing the WB API.

We should consider longer-run solutions for dealing with errors from any API, perhaps a try/except block that issues a warning when the API isn't accessible.
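The suggested try/except pattern might look like this (the function and parameter names are ours, not existing OG-ZAF API):

```python
import warnings

def safe_fetch(fetch, fallback=None):
    """Call `fetch` (e.g. a function hitting the WB API); on any
    failure, issue a warning and return `fallback` instead of raising."""
    try:
        return fetch()
    except Exception as err:
        warnings.warn(f"API not accessible ({err}); using fallback value.")
        return fallback

# Hypothetical usage: wrap the real API call, falling back to a default
# alpha_T = safe_fetch(lambda: get_wb_transfers_share(), fallback=0.04)
```

Passing the API call in as a callable keeps the error handling in one place and makes the fallback behavior easy to test without network access.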

Update installation instructions for ARM Macs (M1 and M2)

We've tested the process of installing OG-ZAF on a "clean" Mac with the M1 chip. This computer had no previous installation of python, Anaconda, or dependencies. We tried to follow the installation instructions as written in the Read Me. During the installation, we ran into a few problems:

  • Late in the process, an error required that users install Xcode Tools. Users should do this as a first step: xcode-select --install
  • Users should download the correct version of Anaconda and Python. The prominent link in the Anaconda page downloads the X86 version. Instead, users should go to the bottom of the installation page: Go here. Then select the version for the M1 (ARM) instead of x86 (Intel).
  • With Anaconda for M1 chips installed, you will get an error when you try conda env create -f environment.yml. This is because the "MKL" library will not be found (it is an Intel-only library). To fix this you must follow the instructions in this comment.

There are significant performance gains to using the appropriate version of Anaconda and Python on ARM Macs. The speedup in tests is 1.6-1.7x.

The installation instructions should reflect these platform-specific steps. Better still would be installation scripts that handle the platform-specific steps automatically, since that would simplify the process for users, though I am not sure that is feasible.

Index out of range error when running the model

The model fails to run with a "list index out of range" error. Thanks to @jdebacker for noticing it while testing a different issue.

Trace below:

Number of workers =  7
making dir:  /Users/mlafleur/Projects/OG-ZAF/examples/OG-ZAF-Example/OUTPUT_BASELINE/SS
making dir:  /Users/mlafleur/Projects/OG-ZAF/examples/OG-ZAF-Example/OUTPUT_BASELINE/TPI
In runner, baseline is  True
SS using initial guess factors for r and TR of 1.0 and 1.0 , respectively.
Traceback (most recent call last):
  File "/Users/mlafleur/Projects/OG-ZAF/examples/run_og_zaf.py", line 127, in <module>
    main()
  File "/Users/mlafleur/Projects/OG-ZAF/examples/run_og_zaf.py", line 65, in main
    runner(p, time_path=True, client=client)
  File "/Users/mlafleur/opt/anaconda3/envs/ogzaf-dev-test/lib/python3.9/site-packages/ogcore/execute.py", line 45, in runner
    ss_outputs = SS.run_SS(p, client=client)
  File "/Users/mlafleur/opt/anaconda3/envs/ogzaf-dev-test/lib/python3.9/site-packages/ogcore/SS.py", line 1215, in run_SS
    sol = opt.root(
  File "/Users/mlafleur/opt/anaconda3/envs/ogzaf-dev-test/lib/python3.9/site-packages/scipy/optimize/_root.py", line 235, in root
    sol = _root_hybr(fun, x0, args=args, jac=jac, **options)
  File "/Users/mlafleur/opt/anaconda3/envs/ogzaf-dev-test/lib/python3.9/site-packages/scipy/optimize/_minpack_py.py", line 229, in _root_hybr
    shape, dtype = _check_func('fsolve', 'func', func, x0, args, n, (n,))
  File "/Users/mlafleur/opt/anaconda3/envs/ogzaf-dev-test/lib/python3.9/site-packages/scipy/optimize/_minpack_py.py", line 26, in _check_func
    res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
  File "/Users/mlafleur/opt/anaconda3/envs/ogzaf-dev-test/lib/python3.9/site-packages/ogcore/SS.py", line 1069, in SS_fsolve
    ) = inner_loop(outer_loop_vars, p, client)
  File "/Users/mlafleur/opt/anaconda3/envs/ogzaf-dev-test/lib/python3.9/site-packages/ogcore/SS.py", line 274, in inner_loop
    etr_params_3D = [
  File "/Users/mlafleur/opt/anaconda3/envs/ogzaf-dev-test/lib/python3.9/site-packages/ogcore/SS.py", line 275, in <listcomp>
    [
  File "/Users/mlafleur/opt/anaconda3/envs/ogzaf-dev-test/lib/python3.9/site-packages/ogcore/SS.py", line 276, in <listcomp>
    [p.etr_params[-1][s][i] for i in range(num_params)]
  File "/Users/mlafleur/opt/anaconda3/envs/ogzaf-dev-test/lib/python3.9/site-packages/ogcore/SS.py", line 276, in <listcomp>
    [p.etr_params[-1][s][i] for i in range(num_params)]
IndexError: list index out of range

cc: @GuBenda

OG-ZAF not compatible with OG-Core 0.11.13

@jdebacker. I just recreated the conda environment ogzaf-dev that included the new version of OG-Core (v0.11.13). I ran the run_og_zaf.py file and got the following error. I think this is coming from the new pension adjustments.

(ogzaf-dev) richardevans@Richards-MacBook-Pro-2 examples % python run_og_zaf.py
Number of workers =  7
Traceback (most recent call last):
  File "/Users/richardevans/Docs/Economics/OSE/OG-ZAF/examples/run_og_zaf.py", line 139, in <module>
    main()
  File "/Users/richardevans/Docs/Economics/OSE/OG-ZAF/examples/run_og_zaf.py", line 49, in main
    p.update_specifications(
  File "/opt/anaconda3/envs/ogzaf-dev/lib/python3.11/site-packages/ogcore/parameters.py", line 440, in update_specifications
    self.adjust(revision, raise_errors=raise_errors)
  File "/opt/anaconda3/envs/ogzaf-dev/lib/python3.11/site-packages/paramtools/parameters.py", line 257, in adjust
    return self._adjust(
           ^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/ogzaf-dev/lib/python3.11/site-packages/paramtools/parameters.py", line 375, in _adjust
    raise self.validation_error
paramtools.exceptions.ValidationError: {
    "errors": {
        "schema": [
            "Unknown field: AIME_num_years"
        ]
    }
}

Update documentation

The current documentation is a placeholder from OG-USA and should be updated to reflect the new sources and methods used in OG-ZAF. This includes:

  • Demographics
  • Macroeconomic parameters
  • Lifetime income profiles
  • Sector calibration and sources

Why are fertility and mortality rates adjusted from the original data?

Looking at the code in demographics.py, I noticed that the get_fert() function is rebinning the data from 100 years into 80 years by applying an adjustment (101/80) and then using this weight to compute new fertility rates for each of the 80 bins. (Mortality does this too).

One example starts in line 142:

# Calculate average fertility rate in each age bin using trapezoid

I’m trying to understand a bit more about the rationale for this transformation. I looked at the notes on calibration and it doesn’t explain why there is a need to do this 100 -> 80 rebinning.

Why is this needed? I know that there are 80 years of economic activity (‘totpers’) but I don’t understand why we can’t simply use the direct fertility data from the source. If this is necessary for the model, we should include the reason in the documentation.
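For reference, the kind of rebinning being asked about can be sketched as follows. This is a simplified stand-in for the trapezoid averaging in demographics.py (the function name and the dense-grid averaging are ours), but it shows the basic idea: average the 100 one-year rates over 80 equally wide model-age bins.

```python
import numpy as np

def rebin_rates(rates, totpers=80):
    """Average age-specific rates over (len(rates)/totpers)-year-wide
    bins so the data line up with the model's `totpers` periods."""
    n = len(rates)                        # e.g. 100 one-year age bins
    edges = np.linspace(0.0, n, totpers + 1)
    fine = np.linspace(0.0, n, 10 * n)    # dense grid for averaging
    vals = np.interp(fine, np.arange(n) + 0.5, rates)
    out = np.empty(totpers)
    for i in range(totpers):
        in_bin = (fine >= edges[i]) & (fine < edges[i + 1])
        out[i] = vals[in_bin].mean()
    return out
```

One property worth noting: a constant rate survives the rebinning unchanged, so the transformation reshapes the age grid without distorting levels.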

Pass data start and end year arguments

Allow users to pass data start and end years for macro_params. These should be accessible via the Calibration class so they can be passed from there.

Question: We already have start and end years for demographics data (but these can be prospective). What's the most intuitive way to allow for multiple start and end years for data?

Create a "guesser" tool to identify possible first guesses of SS variables

In creating the baseline model, it would be helpful to know which values of the steady-state variables give valid solutions. A script could loop through the baseline model using different guesses and report back which ones did not fail. To keep the loop fast, it should only run the validity check rather than the entire solution.
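A minimal sketch of such a guesser loop, using a toy two-equation system in place of the real steady-state solver (the system, guesses, and names are stand-ins; the real loop would call the OG-Core SS solver step with each guess and record convergence):

```python
import numpy as np
from scipy import optimize

def toy_system(x):
    """Toy stand-in for the SS equations in (r, TR)."""
    r, tr = x
    return [np.exp(r) - 3.0 * tr, r + tr - 1.0]

candidate_guesses = [(0.02, 0.5), (0.5, 0.5), (2.0, 0.1), (-1.0, 2.0)]
valid = []
for guess in candidate_guesses:
    sol = optimize.root(toy_system, guess)
    if sol.success:            # validity check only; no full time path
        valid.append(guess)
```

The output `valid` is the list of guesses worth trying for a full run.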

Calibrating multiple industries

We want to calibrate multiple industries in OG-ZAF.

What industries to calibrate?
The first step is to choose what industries to calibrate. I recommend the following, which come from the South African Reserve Bank Quarterly Bulletin:

  • Primary sector: Agriculture, forestry, and fishing + Mining and quarrying
  • Secondary sector (this sector must be the Mth industry because it will be the industry that produces goods that can be used as capital): Manufacturing + Electricity, gas and water + Construction (contractors)
  • Tertiary sector: Wholesale and retail trade, catering and accommodation + Transport, storage and communication + Finance, insurance, real estate and business services + Government services + Other

Another industry we could include is the informal sector, which is a production sector that does not pay corporate income taxes.

What parameters and what data are needed to calibrate these 3 or 4 industries?
Let m be the index of the mth industry with the total number of industries being M. Let i be the index of the ith consumption good with the total number of consumption goods being I. The theory for the parameters listed below is in the OG-Core documentation for the theory of the Firm and Households.

  • Number of industries M: Number of industries is set in ogzaf_default_parameters.json as the parameter scalar M.
  • Number of consumption goods I: The choice of the number of consumption goods depends on the research questions you want to answer and the quality of the input-output data you have for the country. The cheapest calibration is to assume that each production industry produces a good that households consume. In this case I = M. But I does not have to equal M in general.
  • Total factor productivity Z_m: TFP is represented in the theory as Z_m and its value is given by the list-of-lists parameter Z in ogzaf_default_parameters.json. For example, if M=3, a corresponding value for Z is Z = [[1.0, 1.2, 0.9]]. The Z value for a given industry is estimated by a standard log regression of industry output on industry capital stock and industry labor hours. Because the model units are endogenous, the important thing about calibrating the Z value for each industry is getting its relative size right. As such, it makes sense to normalize one industry's TFP to 1.0 and set the other TFP values so that the relative sizes of the industries match the output data.
  • Constant elasticity of substitution epsilon. A Cobb-Douglas specification for M=3 industries would be epsilon = [1.0, 1.0, 1.0].
  • For now, let's exclude public capital K_g from the production function using the gamma_g parameter. If the number of industries is M=3, the correct specification would be gamma_g = [0.0, 0.0, 0.0]. This parameter allows us to account for government infrastructure investment that enters the production function of each industry. Although the zero gamma_g values should be sufficient, we should set initial_Kg_ratio to zero as well.
  • Capital share of income. In the documentation of the theory, this parameter is gamma_m. In the ogzaf_default_parameters.json, the parameter is gamma and is a list of scalars strictly between 0 and 1. For example, if M=3 a valid specification would be gamma = [0.2, 0.5, 0.3]. Because labor cost data for industries is usually more accurate than capital cost data, we calibrate the capital share of income by calculating the labor share of income. Then the capital share is 1 - labor share. Labor share = total labor compensation / output, where total labor compensation is employee compensation plus proprietor compensation.
  • Input-output matrix io_matrix: This parameter maps production industry output to consumption categories. The easiest calibration for a model with M=3 and I=3 is a 3 x 3 identity matrix input into ogzaf_default_parameters.json in the following way.
"io_matrix": [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0]
],
  • Consumption share in consumption aggregator function alpha_c: In the documentation, the consumption share is defined as alpha_i (see equation 10). In OG-Core, the parameter is alpha_c. A valid calibration for a model with I=3 consumption goods would be alpha_c = [0.3, 0.4, 0.3].
  • Corporate income tax rate cit_rate
  • Consumption tax rate tau_c
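As a numerical sketch of the capital-share calibration step described above (all compensation and output figures are made up for illustration):

```python
# Hypothetical industry accounts for M=3 industries (made-up numbers)
employee_compensation = [150.0, 420.0, 900.0]
proprietor_compensation = [30.0, 40.0, 160.0]
gross_output = [300.0, 1150.0, 2650.0]

# Labor share = (employee + proprietor compensation) / output
labor_share = [
    (ec + pc) / go
    for ec, pc, go in zip(
        employee_compensation, proprietor_compensation, gross_output
    )
]

# Capital share of income: gamma_m = 1 - labor share
gamma = [1.0 - ls for ls in labor_share]
```

This mirrors the logic in the bullet above: labor cost data are more reliable, so the capital share is backed out as the residual.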

cc: @SeaCelo @sarvonz @jdebacker

New source for r_gov_shift and r_gov_scale

Currently the calculation of r_gov_shift and r_gov_scale (in macro_params.py) uses data for US corporate and sovereign yields because this data is readily available in the FRED dataset. Corporate yields for other countries are only available from paid sources. The US financial system is a poor proxy for our needs and we want to use a default estimate of these two parameters that is more relevant to developing and emerging countries.

Since we lack access to the actual corporate yield data, my proposal is to use the results of papers that did have access to the data and estimated the correlation between sovereign and corporate yields for emerging markets. We can use their estimates to compute the parameters for our default calibration.

I have two possible sources that estimated the correlation:

  1. "Corporate yields and sovereign yields". This paper uses a much larger dataset (79,332 bonds issued by firms from 22 advanced economies and 22 emerging economies) and many more controls. Unfortunately it does not report the estimated constant for their model. It would be possible to guess the constant to fit the published line, but this is not my first choice.
  2. "The Long-Run Impact of Sovereign Yields on Corporate Yields in Emerging Markets". This paper uses data for 2,045 corporate bonds issued by 1,040 companies, and 431 sovereign bonds, from 46 emerging economies. It reports the correlation coefficients and the constant. I recommend using this one.

Figure 3 reports the coefficients for a quadratic fitted relation between corporate yields (y) and sovereign yields (x): y=−2.975x+0.478x^2+8.199. We can use these coefficients and the constant to:

  1. compute the estimated values of y for a range of x
  2. use the estimated values to fit a linear relation in the reverse direction since the model expects sovereign yields to be the dependent variable.

This will give us an estimate for r_gov_shift and r_gov_scale based on the modelled relationship estimated in the paper.
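The two-step procedure can be sketched with numpy as follows. The yield range is an assumption, and the mapping of slope/intercept onto r_gov_scale and r_gov_shift (including signs) should be checked against the OG-Core convention before use:

```python
import numpy as np

# Quadratic from Figure 3 of the paper: corporate yield (y) as a
# function of sovereign yield (x), both in percent.
x = np.linspace(2.0, 12.0, 101)     # assumed range of sovereign yields
y = -2.975 * x + 0.478 * x ** 2 + 8.199

# Step 2: fit the reverse linear relation, sovereign = a*corporate + b,
# since the model expects sovereign yields to be the dependent variable.
# np.polyfit returns [slope, intercept].
r_gov_scale, r_gov_shift = np.polyfit(y, x, 1)
```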

Filter demographic data by forecast type

In demographics.py, the functions get_un_fert_data() and get_un_mort_data() work as expected if the range start_year to end_year does not include a forecasted year. For example, from 2021 to 2021. However, for any year that is computed using forecasts, the data returns every forecast type, defined in the table below. In this case the data will have an unexpected number of observations, affecting the subsequent calculations. It is important to only return the appropriate forecast type when the data range includes forecasted data.

Forecast types:

VariantId Variant
4 Median
9 High-fertility
10 Low-fertility
11 Constant-fertility
12 Instant-replacement-fertility
13 Zero-migration
14 Constant-mortality
15 No change
16 Momentum
17 Instant-replacement-zero-migration

More here: https://population.un.org/wpp/DefinitionOfProjectionScenarios/

I'm suggesting we filter the data returned according to the forecast type, with a default of 4 (median). The user can change this if a different forecast variant is desired. This forecast type should also be used to filter the data once it is returned.
add:

    download: bool = True,
    variant: int = 4,

Also change the description of the function to add:
variant (int): the model variant used by the UN to generate forecasts. Default is "median" (4).

Keep the variable variantID and use it to filter the data:

usecols=["TimeLabel", "SexId", "Sex", "AgeStart", "Value", "VariantId"],
        float_precision="round_trip",
    )
    pop_df=pop_df.loc[pop_df['VariantId'] == variant]

and

        usecols=["TimeLabel", "AgeStart", "Value", "VariantId"],
        float_precision="round_trip",
    )
    fert_rates_df=fert_rates_df.loc[fert_rates_df['VariantId'] == variant]

Apply these changes to both the fertility and the mortality functions. The code needs to be robust to the case where the date range does not include a forecast and no VariantId is returned.
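A sketch of a helper implementing this filtering robustly (the function name is ours; the column name follows the snippets above):

```python
import pandas as pd

def filter_variant(df, variant=4):
    """Keep only rows for the requested UN projection variant
    (default 4 = Median).  If the date range includes no forecast,
    the VariantId column may be absent or empty; in that case the
    data are returned unchanged."""
    if "VariantId" not in df.columns or df["VariantId"].isna().all():
        return df
    return df.loc[df["VariantId"] == variant]
```

The same helper could be applied to the population, fertility, and mortality DataFrames after each API call.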

More detail in the diff to the demographics.py file here:
https://github.com/EAPD-DRB/OG-ZAF/pull/14/files#diff-8cd0cb371a044cd5496d20218de7bd37f4c11988b2052cf91303a83217ac8530

Originally posted by @SeaCelo in #14 (comment)

Labor share data from UN Statistics is returning a server error

Add option to use local data in macro_params.py

macro_params.py relies on calls to APIs. It would be good to have an option, via a boolean flag, to load local data from a .csv file instead, in the same way that demographics.py does.

This improves on the current PR #17

MKL install dependencies error in M1 Mac

Note: This problem occurs on an M1 Mac that uses miniforge instead of the standard Anaconda installation. Numpy installs openblas instead of mkl for acceleration (the former is ARM native).

The command conda env create -f environment.yml results in an installation error:

ResolvePackageNotFound: 
  - mkl[version='>=2020.2']

It is possible to comment out line 6 of environment.yml to get past this error. However, the error returns on line 34 during the pip installation of ogcore, which also has MKL as a dependency.

To avoid having to install MKL and run it under rosetta emulation, we need to ignore the MKL dependency.

Suggested (manual) solution:

  1. comment out lines 6 and 34 of environment.yml.
  2. after running the conda env create -f environment.yml step, run:
  • conda activate ogzaf-dev
  • pip install --no-dependencies ogcore
  • continue with the installation process: pip install -e .

Using these steps I successfully installed OG-ZAF and ran the test python run_og_zaf.py.
Baseline run time: 1116.8 seconds (18.6 minutes)
Reform run time: 1092.3 seconds (18.2 minutes)
Total run time: 2209.1 seconds (36.8 minutes)

SSL problems when accessing the UN Data Portal API

This is related to an earlier issue discussion and to this PR: jdebacker/OG-MYS#1

Summary: To access demographic data from the UN's Data Portal API, we ran into issues with how the server negotiated SSL sessions. A quick solution was to downgrade openssl to version 1.1.1. Unfortunately, this creates conflicts with other packages as discussed in this PR thread.

The solution is to fix the code that accesses the UN Data Portal so that it no longer needs the older version of openssl.

cc: @rickecon @jdebacker

Some Data Suggestions

Hi all

Some additional data sources to consider (I haven't used many of these since I was an MCom student, but they are the 'go to' sources):

NIDS (National Income Dynamics Study): a really great panel of micro-type survey data for calibrating consumption equations. http://www.nids.uct.ac.za/nids-data/data-access

QLFS and QES (Quarterly Labour Force Survey and Quarterly Employment Statistics): available from Stats SA at http://nesstar.statssa.gov.za:8282/webview/ along with a few more survey data sources you may be interested in (tourism, live births, etc.).

Education data: PIRLS and TIMSS https://www.iea.nl/studies/iea/pirls. (perhaps not so relevant)

Income and Expenditure Survey (IES): Should be on nesstar as above

The Quarterly Bulletin is super detailed and available on EconData; once you know which code is which, it's easy to pull, and it is my all-time favourite publication. It includes a lot of information derived from other sources too. https://www.resbank.co.za/en/home/publications/publication-detail-pages/quarterly-bulletins/quarterly-bulletin-publications/2024/FullQuarterlyBulletinNo312June2024

Give me a shout; I'm happy to assist online and/or in November.

Issue with openssl in setup.py

PR #25 had a commit, added right before it was merged, that inserted "openssl=1.1.1", into setup.py. This was my fault; it snuck in as @SeaCelo was merging the PR, so it didn't get tested. This causes two problems.

  • In setup.py, pinning the package needs to have the following form with two equals signs: "openssl==1.1.1"
  • Even with this specified correctly, which I have done in a new branch, the pip install of the ogzaf package through pip install -e . after creating the ogzaf-dev conda environment fails with the following message.
ERROR: Could not find a version that satisfies the requirement openssl==1.1.1 (from ogzaf) (from versions: none)
ERROR: No matching distribution found for openssl==1.1.1

When I activate the ogzaf-dev conda environment and type conda list openssl, I get the following:

# packages in environment at /Users/richardevans/opt/anaconda3/envs/ogzaf-dev:
#
# Name                    Version                   Build  Channel
openssl                   1.1.1s               h03a7124_1    conda-forge
pyopenssl                 23.0.0             pyhd8ed1ab_0    conda-forge

This should be an easy fix, but I don't know what it is right now. @SeaCelo @sarvonz
