
postgres-aws-s3

Starting with Postgres version 11.1, AWS RDS added support for S3 import through the aws_s3 extension. It allows importing data from S3 into Postgres using the function aws_s3.table_import_from_s3 and exporting data to S3 using the function aws_s3.query_export_to_s3.

To support development both on RDS and locally, we implemented our own aws_s3 extension that is similar to the one provided on RDS. It is implemented in Python using the boto3 library.

Installation

Make sure boto3 is installed for the default Python 3 on your computer (the interpreter that Postgres's plpython3u will use). On macOS, this can be done as follows:

sudo /usr/bin/easy_install boto3

Then clone the repository postgres-aws-s3:

git clone git@github.com:chimpler/postgres-aws-s3

Make sure that pg_config can be run:

$ pg_config 

BINDIR = /Applications/Postgres.app/Contents/Versions/13/bin
DOCDIR = /Applications/Postgres.app/Contents/Versions/13/share/doc/postgresql
HTMLDIR = /Applications/Postgres.app/Contents/Versions/13/share/doc/postgresql
INCLUDEDIR = /Applications/Postgres.app/Contents/Versions/13/include
PKGINCLUDEDIR = /Applications/Postgres.app/Contents/Versions/13/include/postgresql
INCLUDEDIR-SERVER = /Applications/Postgres.app/Contents/Versions/13/include/postgresql/server
LIBDIR = /Applications/Postgres.app/Contents/Versions/13/lib
...

Then install postgres-aws-s3:

make install

Finally in Postgres:

psql> CREATE EXTENSION plpython3u;
psql> CREATE EXTENSION aws_s3;
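
Optionally, you can check that the Python interpreter embedded by plpython3u can actually import boto3 (a common source of "No module named 'boto3'" errors later on). A minimal sanity check, assuming you are connected as a superuser:

psql> DO $$ import boto3 $$ LANGUAGE plpython3u;  -- completes silently if boto3 is importable

If this raises ModuleNotFoundError, boto3 was installed for a different Python than the one Postgres uses.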

If you already have an old version of aws_s3 installed, you might want to drop and recreate the extension:

psql> DROP EXTENSION aws_s3;
psql> CREATE EXTENSION aws_s3;

Using aws_s3

Importing data using table_import_from_s3

Let's create a table into which we will import the data from S3:

psql> CREATE TABLE animals (
    name TEXT,
    age INT
);

Let's suppose the following file is present in s3 at s3://test-bucket/animals.csv:

name,age
dog,12
cat,15
parrot,103
tortoise,205

The function aws_s3.table_import_from_s3 can be called with two different signatures.

Using s3_uri and aws_credentials objects

aws_s3.table_import_from_s3 (
   table_name text, 
   column_list text, 
   options text, 
   s3_info aws_commons._s3_uri_1,
   credentials aws_commons._aws_credentials_1,
   endpoint_url text default null
)

Using this signature, the s3_uri and aws_credentials objects will need to be created first:

Parameter      Description
table_name     the name of the table
column_list    list of columns to copy
options        options passed to the COPY command in Postgres
s3_info        An aws_commons._s3_uri_1 composite type containing the bucket, file path and region information about the s3 object
credentials    An aws_commons._aws_credentials_1 composite type containing the access key, secret key, session token credentials
endpoint_url   optional endpoint to use (e.g., http://localhost:4566)

Example
psql> SELECT aws_commons.create_s3_uri(
   'test-bucket',
   'animals.csv',
   'us-east-1'
) AS s3_uri \gset

psql> \echo :s3_uri
(test-bucket,animals.csv,us-east-1)

psql> SELECT aws_commons.create_aws_credentials(
   '<my_access_id>',
   '<my_secret_key>',
   '<session_token>'
) AS credentials \gset

psql> \echo :credentials
(<my_access_id>,<my_secret_key>,<session_token>)

psql> SELECT aws_s3.table_import_from_s3(
   'animals',
   '',
   '(FORMAT CSV, DELIMITER '','', HEADER true)',
   :'s3_uri',
   :'credentials'
);

 table_import_from_s3
----------------------
                    4
(1 row)

psql> select * from animals;
   name   | age
----------+-----
 dog      |  12
 cat      |  15
 parrot   | 103
 tortoise | 205
(4 rows)

You can also call the function as:

psql> SELECT aws_s3.table_import_from_s3(
   'animals',
   '',
   '(FORMAT CSV, DELIMITER '','', HEADER true)',
   aws_commons.create_s3_uri(
      'test-bucket',
      'animals.csv',
      'us-east-1'
   ),
   aws_commons.create_aws_credentials(
      '<my_access_id>',
      '<my_secret_key>',
      '<session_token>'
   )
);

Using the function table_import_from_s3 with all the parameters

aws_s3.table_import_from_s3 (
   table_name text,
   column_list text,
   options text,
   bucket text,
   file_path text,
   region text,
   access_key text,
   secret_key text,
   session_token text,
   endpoint_url text default null
) 
Parameter       Description
table_name      the name of the table
column_list     list of columns to copy
options         options passed to the COPY command in Postgres
bucket          S3 bucket
file_path       S3 path to the file
region          S3 region (e.g., us-east-1)
access_key      aws access key id
secret_key      aws secret key
session_token   optional session token
endpoint_url    optional endpoint to use (e.g., http://localhost:4566)

Example
psql> SELECT aws_s3.table_import_from_s3(
    'animals',
    '',
    '(FORMAT CSV, DELIMITER '','', HEADER true)',
    'test-bucket',
    'animals.csv',
    'us-east-1',
    '<my_access_id>',
    '<my_secret_key>',
    '<session_token>'
);

 table_import_from_s3
----------------------
                    4
(1 row)

psql> select * from animals;

   name   | age
----------+-----
 dog      |  12
 cat      |  15
 parrot   | 103
 tortoise | 205
(4 rows)

If you use localstack, you can set the aws_s3.endpoint_url setting to point to the localstack S3 endpoint:

psql> SET aws_s3.endpoint_url TO 'http://localstack:4566'; 

You can also set the AWS credentials:

psql> SET aws_s3.access_key_id TO 'dummy';
psql> SET aws_s3.secret_key TO 'dummy';
psql> SET aws_s3.session_token TO 'dummy';

and then omit them from the function calls.

For example:

psql> SELECT aws_s3.table_import_from_s3(
    'animals',
    '',
    '(FORMAT CSV, DELIMITER '','', HEADER true)',
    'test-bucket',
    'animals.csv',
    'us-east-1'
);

These settings can also be passed as optional parameters. For example:

psql> SELECT aws_s3.table_import_from_s3(
    'animals',
    '',
    '(FORMAT CSV, DELIMITER '','', HEADER true)',
    'test-bucket',
    'animals.csv',
    'us-east-1',
    endpoint_url := 'http://localstack:4566'
);

Support for gzip files

If the file has the Content-Encoding=gzip metadata set in S3, it will automatically be decompressed before being copied into the table. You can set this metadata when uploading the object (for example, with the AWS CLI's --content-encoding option) or update it afterwards on the object in S3.

Exporting data using query_export_to_s3

Documentation: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/postgresql-s3-export.html

As with the import function, you can export data using two different signatures.

Using s3_uri and aws_credentials objects

aws_s3.query_export_to_s3(
    query text,    
    s3_info aws_commons._s3_uri_1,
    credentials aws_commons._aws_credentials_1 default null,
    options text default null, 
    endpoint_url text default null
)

Using this signature, the s3_uri and optionally aws_credentials objects will need to be created first:

Parameter      Description
query          query that returns the data to export
s3_info        An aws_commons._s3_uri_1 composite type containing the bucket, file path and region information about the s3 object
credentials    An aws_commons._aws_credentials_1 composite type containing the access key, secret key, session token credentials
options        options passed to the COPY command in Postgres
endpoint_url   optional endpoint to use (e.g., http://localhost:4566)

Example
psql> SELECT * FROM aws_s3.query_export_to_s3(
   'select * from animals',
   aws_commons.create_s3_uri(
      'test-bucket',
      'animals2.csv',
      'us-east-1'
   ),
   aws_commons.create_aws_credentials(
      '<my_access_id>',
      '<my_secret_key>',
      '<session_token>'
   ),
   options := 'FORMAT CSV, DELIMITER '','', HEADER true'
);

If you have set the AWS credentials as settings:

psql> SET aws_s3.access_key_id TO 'dummy';
psql> SET aws_s3.secret_key TO 'dummy';
psql> SET aws_s3.session_token TO 'dummy';

you can omit the credentials from the call.

Example
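
For instance, a minimal sketch of the earlier export that relies on the settings above instead of an explicit aws_credentials object (it reuses the test-bucket and animals2.csv names from the previous example):

psql> SELECT * FROM aws_s3.query_export_to_s3(
   'select * from animals',
   aws_commons.create_s3_uri('test-bucket', 'animals2.csv', 'us-east-1'),
   options := 'FORMAT CSV, DELIMITER '','', HEADER true'
);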

Using the function query_export_to_s3 with all the parameters

aws_s3.query_export_to_s3(
    query text,    
    bucket text,    
    file_path text,
    region text default null,
    access_key text default null,
    secret_key text default null,
    session_token text default null,
    options text default null, 
    endpoint_url text default null   
)
Parameter       Description
query           query that returns the data to export
bucket          S3 bucket
file_path       S3 path to the file
region          S3 region (e.g., us-east-1)
access_key      aws access key id
secret_key      aws secret key
session_token   optional session token
options         options passed to the COPY command in Postgres
endpoint_url    optional endpoint to use (e.g., http://localhost:4566)

Example
psql> SELECT * FROM aws_s3.query_export_to_s3(
   'select * from animals',
   'test-bucket',
   'animals.csv',
   'us-east-1',
    '<my_access_id>',
    '<my_secret_key>',
    '<session_token>',
   options:='FORMAT CSV, HEADER true'
);

 rows_uploaded | files_uploaded | bytes_uploaded
---------------+----------------+----------------
             5 |              1 |             47

If you have set the AWS credentials as settings:

psql> SET aws_s3.access_key_id TO 'dummy';
psql> SET aws_s3.secret_key TO 'dummy';
psql> SET aws_s3.session_token TO 'dummy';

you can omit the credential fields from the call.
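
For instance, a minimal sketch using the all-parameters form with the credential arguments left out (they default to null, and the settings above plus the animals table from earlier are assumed):

psql> SELECT * FROM aws_s3.query_export_to_s3(
   'select * from animals',
   'test-bucket',
   'animals.csv',
   'us-east-1',
   options := 'FORMAT CSV, HEADER true'
);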

Docker Compose

We provide a docker compose config to run localstack and postgres in docker containers. To start it:

$ docker-compose up

It initializes an S3 server on port 4566 with a bucket test-bucket, which you can list with:

aws s3 --endpoint-url=http://localhost:4566 ls s3://test-bucket

You can connect to the postgres server:

$ psql -h localhost -p 15432 -U test test 
(password: test)

Initialize the extensions:

psql> CREATE EXTENSION plpython3u;
psql> CREATE EXTENSION aws_s3;

Set the endpoint URL and the AWS keys used to access S3 (with localstack you can set the AWS credentials to any non-empty string):

psql> SET aws_s3.endpoint_url TO 'http://localstack:4566';
psql> SET aws_s3.aws_access_key_id TO 'dummy';
psql> SET aws_s3.secret_access_key TO 'dummy';

Create a table animals:

psql> CREATE TABLE animals (
    name TEXT,
    age INT
);

psql> INSERT INTO animals (name, age) VALUES
('dog', 12),
('cat', 15),
('parrot', 103),
('tortoise', 205);

Export it to s3:

psql> select * from aws_s3.query_export_to_s3('select * from animals', 'test-bucket', 'animals.csv', 'us-east-1', options:='FORMAT CSV, HEADER true');
 rows_uploaded | files_uploaded | bytes_uploaded
---------------+----------------+----------------
             5 |              1 |             47

Import it back to another table:

psql> CREATE TABLE new_animals (LIKE animals);

psql> SELECT aws_s3.table_import_from_s3(
    'new_animals',
    '',
    '(FORMAT CSV, HEADER true)',
    'test-bucket',
    'animals.csv', 'us-east-1'
);
 table_import_from_s3
----------------------
                    4
(1 row)

psql> SELECT * FROM new_animals;
   name   | age
----------+-----
 dog      |  12
 cat      |  15
 parrot   | 103
 tortoise | 205
(4 rows)

Contributors

Thanks to darthbear, huiser, jleto, mjgp2, oyarushe, and phileon.


Issues

rows_uploaded giving incorrect value for a table having 2,400,000,000 (2.4 billion rows)

Hi, I'm trying to export data from an Aurora Postgres table with 2.4 billion rows to S3. Although the function runs successfully, it returns a wrong value for rows_uploaded. This is the output I received: {"rows_uploaded":-1894967296,"files_uploaded":4,"bytes_uploaded":25619111689}. rows_uploaded is a BIGINT and should be able to hold this value, but it does not, maybe because of some casting underlying the function. Kindly look into this urgently as this is a production issue for us.
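
For reference, the reported figure matches a signed 32-bit wrap-around: 2,400,000,000 - 4,294,967,296 (i.e., 2^32) = -1,894,967,296, so the count is presumably being passed through a 4-byte integer somewhere before being returned (an assumption based on the numbers, not confirmed against the extension code). The arithmetic can be checked directly in psql:

psql> SELECT 2400000000 - 4294967296 AS wrapped_count;
 wrapped_count
---------------
   -1894967296
(1 row)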

Error while export using aws_s3.query_export_to_s3 function.

Hello! I'm just trying to use your aws_s3 extension.

When running this query:

SELECT * FROM aws_s3.query_export_to_s3(
query => '
select
json_agg(post_main_approved)
from asmo.post_main_approved
limit 10 ;
',
bucket => 'arn:aws:s3:::asmo-ij-json',
file_path => 'test.json',
region => 'eu-central-1a',
access_key => '<access_key>',
secret_key => '<secret_key>'
);

I then get this error:

error fetching next item from iterator. spiexceptions.SyntaxError: syntax error at or near ";"

Can you help to deal with this issue? Thanks!
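
A hedged guess at the cause (not confirmed in this thread): the query text appears to be embedded in a larger SQL statement by the extension, so the trailing ';' inside the quoted query is what triggers the syntax error. In addition, region should be 'eu-central-1' rather than the availability-zone-style 'eu-central-1a', and bucket should be the bare bucket name rather than an ARN. A corrected call could look like this (the bucket name is inferred from the ARN above):

SELECT * FROM aws_s3.query_export_to_s3(
    query => 'select json_agg(post_main_approved) from asmo.post_main_approved limit 10',
    bucket => 'asmo-ij-json',
    file_path => 'test.json',
    region => 'eu-central-1',
    access_key => '<access_key>',
    secret_key => '<secret_key>'
);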

Export support?

Is anyone here interested in extending this to support exports to s3?

Those were added back in February in Aurora 3.1.0/11.6.

See here for details:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/postgresql-s3-export.html

The function reference is in the documentation above; it looks like this:

aws_s3.query_export_to_s3(
    query text,    
    s3_info aws_commons._s3_uri_1,    
    options text
)

An example to dump JSON to S3 in JSONL format:

create extension aws_s3 cascade;

select * from aws_s3.query_export_to_s3(
    'select row_to_json(test) from (YOUR QUERY GOES HERE) test',
    aws_commons.create_s3_uri(
        'example-bucket/some/path',
        'somedata.jsonl',
        'us-east-1'),
    options :='format text');

segmentation fault

Hi,
Thank you for this script. I have an issue on a Postgres RDS instance (16 GB) when I use your query.
Log output:

2019-07-01 14:46:15 UTC::@:[20878]:LOG: server process (PID 10308) was terminated by signal 11: Segmentation fault
2019-07-01 14:46:15 UTC::@:[20878]:DETAIL: Failed process was running: SELECT aws_s3.table_import_from_s3(
'mb."type details"',
'',
'(DELIMITER '','', NULL '''')',
'#my bucket',
'csvfiles/07-01-2019/type details/type details_0002_part_00',
'us-west-2',
'#my cred',
'#my cred',
''
);

Thank you

table_import_from_s3 in localstack has Access Key error

I am trying to import s3 data from localstack, using:

select aws_s3.table_import_from_s3(
'tablename',
'col1,col2',
'(format csv, header true)', 
aws_commons.create_s3_uri('my-bucket','test.csv','us-west-2'),
aws_commons.create_aws_credentials('none','none',''),
'http://localstack:4566');

none is what I use for all localstack calls, but using it here results in:

ERROR: spiexceptions.ExternalRoutineException: botocore.exceptions.ClientError: An error occurred (InvalidAccessKeyId) when calling the GetObject operation: The AWS Access Key Id you provided does not exist in our records.
CONTEXT: Traceback (most recent call last):
PL/Python function "table_import_from_s3", line 7, in
return plan.execute(
PL/Python function "table_import_from_s3"

I have ensured this bucket is publicly accessible via:

AWS_PAGER="" \
AWS_ACCESS_KEY_ID=none \
AWS_SECRET_ACCESS_KEY=none \
aws \
  --endpoint-url=http://localstack:4566 \
  --region=us-west-2 \
  s3api put-bucket-policy \
    --bucket my-bucket \
    --policy '{
      "Id": "Policy1397632521960",
      "Statement": [
        {
          "Sid": "Stmt1397633323327",
          "Action": [
            "s3:GetObject"
          ],
          "Effect": "Allow",
          "Resource": "arn:aws:s3:::my-bucket/*",
          "Principal": {
          "AWS": [
            "*"
          ]
        }
      }
    ]
  }'

ModuleNotFoundError: No module named 'boto3' Despite Installing 'boto3'

I'm running a Node server along with Postgres 12. I have saved a stored function that uses query_export_to_s3 to generate a report and export it to S3. However, I always run into this problem whenever the stored function is executed. Here's the stack trace:

DatabaseError [SequelizeDatabaseError]: spiexceptions.ExternalRoutineException: error fetching next item from iterator
    at Query.formatError (/home/Sam/Documents/GitHub/cms-express-api/node_modules/sequelize/lib/dialects/postgres/query.js:386:16)
    at Query.run (/home/Sam/Documents/GitHub/cms-express-api/node_modules/sequelize/lib/dialects/postgres/query.js:87:18)
    at processTicksAndRejections (internal/process/task_queues.js:97:5) {
  parent: error: spiexceptions.ExternalRoutineException: error fetching next item from iterator
      at Parser.parseErrorMessage (/home/Sam/Documents/GitHub/cms-express-api/node_modules/pg-protocol/src/parser.ts:369:69)
      at Parser.handlePacket (/home/Sam/Documents/GitHub/cms-express-api/node_modules/pg-protocol/src/parser.ts:188:21)
      at Parser.parse (/home/Sam/Documents/GitHub/cms-express-api/node_modules/pg-protocol/src/parser.ts:103:30)
      at Socket.<anonymous> (/home/Sam/Documents/GitHub/cms-express-api/node_modules/pg-protocol/src/index.ts:7:48)
      at Socket.emit (events.js:314:20)
      at Socket.EventEmitter.emit (domain.js:483:12)
      at addChunk (_stream_readable.js:297:12)
      at readableAddChunk (_stream_readable.js:272:9)
      at Socket.Readable.push (_stream_readable.js:213:10)
      at TCP.onStreamRead (internal/stream_base_commons.js:188:23) {
    length: 349,
    severity: 'ERROR',
    code: '38000',
    detail: "ModuleNotFoundError: No module named 'boto3'",
    hint: undefined,
    position: undefined,
    internalPosition: undefined,
    internalQuery: undefined,
    where: 'Traceback (most recent call last):\n' +
      '  PL/Python function "query_export_to_s3", line 6, in <module>\n' +
      '    return plan.execute(\n' +
      'PL/Python function "query_export_to_s3"',
    schema: undefined,
    table: undefined,
    column: undefined,
    dataType: undefined,
    constraint: undefined,
    file: 'plpy_elog.c',
    line: '125',
    routine: 'PLy_elog_impl',
    sql: `begin; select * from public.stored_function_weather_data_export('select * from public."WeatherAA12"','04-11-2022/weather-report-1667546728087.csv'); fetch all in "cursor_data"; close cursor_data;commit;`,
    parameters: undefined
  },
  original: error: spiexceptions.ExternalRoutineException: error fetching next item from iterator
      at Parser.parseErrorMessage (/home/Sam/Documents/GitHub/cms-express-api/node_modules/pg-protocol/src/parser.ts:369:69)
      at Parser.handlePacket (/home/Sam/Documents/GitHub/cms-express-api/node_modules/pg-protocol/src/parser.ts:188:21)
      at Parser.parse (/home/Sam/Documents/GitHub/cms-express-api/node_modules/pg-protocol/src/parser.ts:103:30)
      at Socket.<anonymous> (/home/Sam/Documents/GitHub/cms-express-api/node_modules/pg-protocol/src/index.ts:7:48)
      at Socket.emit (events.js:314:20)
      at Socket.EventEmitter.emit (domain.js:483:12)
      at addChunk (_stream_readable.js:297:12)
      at readableAddChunk (_stream_readable.js:272:9)
      at Socket.Readable.push (_stream_readable.js:213:10)
      at TCP.onStreamRead (internal/stream_base_commons.js:188:23) {
    length: 349,
    severity: 'ERROR',
    code: '38000',
    detail: "ModuleNotFoundError: No module named 'boto3'",
    hint: undefined,
    position: undefined,
    internalPosition: undefined,
    internalQuery: undefined,
    where: 'Traceback (most recent call last):\n' +
      '  PL/Python function "query_export_to_s3", line 6, in <module>\n' +
      '    return plan.execute(\n' +
      'PL/Python function "query_export_to_s3"',
    schema: undefined,
    table: undefined,
    column: undefined,
    dataType: undefined,
    constraint: undefined,
    file: 'plpy_elog.c',
    line: '125',
    routine: 'PLy_elog_impl',
    sql: `begin; select * from public.stored_function_weather_data_export('select * from public."WeatherAA12"','04-11-2022/weather-report-1667546728087.csv'); fetch all in "cursor_data"; close cursor_data;commit;`,
    parameters: undefined
  },
  sql: `begin; select * from public.stored_function_weather_data_export('select * from public."WeatherAA12"','04-11-2022/weather-report-1667546728087.csv'); fetch all in "cursor_data"; close cursor_data;commit;`,
  parameters: undefined
}

I have already installed boto3 by executing pip3 install boto3, but it simply doesn't seem to work.
I also tried installing boto3 by cloning and building it from its source files, following the instructions in the boto3 repository. No success there either.
I'm new to Python and AWS. Perhaps I'm making a rookie mistake, so please bear with me.

Can someone please point out where I'm going wrong ?

system-defined vs user-defined metadata for Content-Encoding on S3 object misses gzip


Some tools, such as pyspark's dataframe S3 writer, do not set the AWS system-defined metadata when writing with "gzip" compression. The Postgres s3_import extension then fails to identify the gzip file as gzip compressed. This results in a UTF-8 encoding error (the reported byte sequence is the start of a gzip byte sequence). The real error is that Postgres did not recognize the gzip encoding and attempted to process the file as an uncompressed text file.

If you attempt to add the "Content-Encoding" = "gzip" metadata programmatically, it is added as user-defined metadata. When viewing it via the AWS console afterwards, it appears with the key name "x-amz-meta-content-encoding", which is not tested for in the extension code.

I will submit a pull request.

rows_uploaded should not include the header row

If query_export_to_s3 is called with HEADER true in the options, the rows are exported with a header, but the returned rows_uploaded includes the header row in the count, unlike AWS, which excludes it from the result.

How to remove the file from S3 after import

Hi, I have a question; it may be something I missed, but anyway.
I would like to use the Postgres COPY command to export/import data to RDS from outside RDS, so the aws_s3 extension looks great.
Let's say we regularly run a job that uses the aws_s3 export function to export data from a number of tables in CSV format to S3 storage (using federated dblinks outside RDS), and then import the CSV files using the aws_s3 import function.
Fine, but how do we clean up the S3 storage and remove the CSV files from S3 after the import?
There seems to be no delete function for removing the file from S3 after it has been imported into RDS.
We have a similar setup today that uses a plsh function to make OS calls and it works great, but I guess that is not supported in RDS.

Big file timeouts

Hello, if I try to import a big CSV file, boto's S3 client times out after ~20 minutes (maybe 15). Is there a way to increase the timeout?

Thanks

Permission denied. Check the bucket is in the region provided.

I connect to my actual S3 bucket just fine with the following URI (Rust):

let s3_uri = format!("aws_commons.create_s3_uri('{bucket}','{key}','us-west-2')");
let copy_query = format!("SELECT aws_s3.table_import_from_s3(\
                    '{table}', \
                    '{columns}', \
                    '(format csv, header true, encoding utf8)', \
                    {s3_uri} \
                );");

But when running locally with a localstack S3 bucket (in us-west-2) and this plugin I get:

Permission denied. Check the bucket is in the region provided.

I trigger this lambda by uploading to the bucket like:

AWS_PAGER="" \
AWS_ACCESS_KEY_ID=none \
AWS_SECRET_ACCESS_KEY=none \
aws \
  --endpoint-url=http://localstack:4566 \
  --region=us-west-2 \
  s3 cp \
    ../ex/test.csv \
    s3://my-bucket/test.csv

Has anyone seen this before? I tried googling that error but nothing matches for localstack or the AWS CLI. I tried running without specifying us-west-2 but get the same error.

Add support for gzip exports

There's support to import gzipped files in table_import_from_s3 but no support to export files as gzip in query_export_to_s3.

Unable to compile - make error

I get the following error on Amazon Linux 2 when running make install:

Makefile:7: /usr/lib64/pgsql/pgxs/src/makefiles/pgxs.mk: No such file or directory
make: *** No rule to make target `/usr/lib64/pgsql/pgxs/src/makefiles/pgxs.mk'.  Stop.

language "plpythonu" does not exist

After installing this with postgresql-server-dev-10 on Ubuntu 18.04 and trying to dynamically enable it:

>       cursor.execute(statement, parameters)
E       sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedObject) language "plpythonu" does not exist
E       HINT:  Use CREATE LANGUAGE to load the language into the database.
E       
E       [SQL: CREATE EXTENSION aws_s3 CASCADE;]
E       (Background on this error at: http://sqlalche.me/e/f405)

update: fix

    conn.execute("CREATE LANGUAGE plpythonu;")
    conn.execute("CREATE EXTENSION aws_s3 CASCADE;")

installation

sudo apt-get install postgresql-server-dev-10 \
  postgresql-plpython3-10 postgresql-plpython-10 \
  postgis postgis-gui postgresql-10-postgis-scripts postgresql-10-pgrouting 
cd src/
git clone https://github.com/chimpler/postgres-aws-s3.git
cd postgres-aws-s3/
pg_config # confirm that works
sudo make install

versions

$ python --version
Python 3.6.10

$ pip freeze | grep oto
boto==2.49.0
boto3==1.12.47
botocore==1.15.47
moto==1.3.14

$ pip freeze | grep psyc
aws-psycopg2==1.2.1

$ pip freeze | grep -i sql
Flask-SQLAlchemy==2.4.1
pytest-postgresql==2.3.0
SQLAlchemy==1.3.16

$ psql --version
psql (PostgreSQL) 10.12 (Ubuntu 10.12-0ubuntu0.18.04.1)
