
appengine-python-standard's Introduction

Google App Engine bundled services SDK for Python 3

This is a release of the App Engine services SDK for Python 3. It provides access to various services and API endpoints that were previously only available on the Python 2.7 runtime.

See the documentation to learn more about using this SDK, and see the product announcement (Fall 2021) for more background.

Additional examples (Datastore [NDB], Task Queues [push tasks], Memcache) can be found in the App Engine migration repo. (Specifically, look for samples whose folders have a "b" but where the Python 2 equivalent folder does not have an "a", meaning this SDK is required; e.g., Modules 1 [mod1 and mod1b], 7, 12, etc.)

Using the SDK

In your requirements.txt file, add the following:

appengine-python-standard>=1.0.0

To use a pre-release version (e.g., 1.0.1-rc1), modify the above line to appengine-python-standard>=[insert_version] (e.g., appengine-python-standard>=1.0.1-rc1).

In your app's app.yaml, add the following:

app_engine_apis: true

In your main.py, import google.appengine.api and wrap your WSGI app object with google.appengine.api.wrap_wsgi_app().

Example for a standard WSGI app:

import google.appengine.api

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    yield b'Hello world!\n'

app = google.appengine.api.wrap_wsgi_app(app)

Example for a Flask app:

import google.appengine.api
from flask import Flask, request

app = Flask(__name__)
app.wsgi_app = google.appengine.api.wrap_wsgi_app(app.wsgi_app)
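Conceptually, the wrapper is ordinary WSGI middleware: it runs per-request setup around your app before delegating to it. The sketch below illustrates the wrapping pattern only, with a hypothetical stand-in middleware, not the SDK's actual implementation:

```python
def wrap_wsgi_app(app):
    """Hypothetical stand-in: a WSGI middleware factory with the same
    shape as the SDK's wrapper (per-request setup, then delegate)."""
    def middleware(environ, start_response):
        environ["example.request_setup_done"] = True  # illustrative setup only
        return app(environ, start_response)
    return middleware

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    yield b"Hello world!\n"

app = wrap_wsgi_app(app)

# Drive one request without a server:
collected = {}
def start_response(status, headers):
    collected["status"] = status

body = b"".join(app({}, start_response))
print(collected["status"], body)  # 200 OK b'Hello world!\n'
```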

Then deploy your app as usual, with gcloud app deploy. The following modules are available:

  • google.appengine.api.app_identity
  • google.appengine.api.background_thread
  • google.appengine.api.blobstore
  • google.appengine.api.capabilities
  • google.appengine.api.croninfo
  • google.appengine.api.dispatchinfo
  • google.appengine.api.images
  • google.appengine.api.mail
  • google.appengine.api.memcache
  • google.appengine.api.modules
  • google.appengine.api.oauth
  • google.appengine.api.runtime
  • google.appengine.api.search
  • google.appengine.api.taskqueue
  • google.appengine.api.urlfetch
  • google.appengine.api.users
  • google.appengine.ext.blobstore
  • google.appengine.ext.db
  • google.appengine.ext.gql
  • google.appengine.ext.key_range
  • google.appengine.ext.ndb
  • google.appengine.ext.testbed

Using the development version of the SDK

To install the code from the main branch on GitHub rather than the latest version published to PyPI, put this in your requirements.txt file instead of appengine-python-standard:

https://github.com/GoogleCloudPlatform/appengine-python-standard/archive/main.tar.gz

appengine-python-standard's People

Contributors

asriniva, embray, estrellis, kritkasahni-google, myelin, phil-lopreiato, shreejad, sriram-mahavadi, wescpy


appengine-python-standard's Issues

Allow using urllib3 v2+

Expected Behavior

It should be possible to install and use appengine-python-standard with recent versions of urllib3.

Actual Behavior

appengine-python-standard restricts urllib3 to <v2. This limits the ability to install and use it with other libraries/packages.

Specifications

  • Version: latest
  • Platform: any

urllib3 changelog for v2.0.0 if that helps with checking compatibility and making any required changes.

🙏

TypeError while calling memory_usage().current()

Expected Behavior

No errors.

Actual Behavior

TypeError: 'float' object is not callable

Steps to Reproduce the Problem

from google.appengine.api import runtime
memory_usage = runtime.runtime.memory_usage().current()

Specifications

The documentation shows that current() should be an accessor and not a float field. This is how it worked in the Python 2 runtime.
https://cloud.google.com/appengine/docs/standard/python3/reference/services/bundled/google/appengine/api/runtime/memory_usage

  • Version: 0.3.1
  • Platform: App Engine python39 runtime
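Until the library matches the documented behavior, a defensive accessor can tolerate both shapes. This is a sketch with stand-in classes, not the real runtime API:

```python
def current_memory(usage):
    """Return current memory usage whether `current` is a method
    (the documented accessor) or a plain float attribute (the
    behavior reported in this issue)."""
    current = getattr(usage, "current")
    return current() if callable(current) else current

# Stand-ins for the two observed shapes of the usage object:
class MethodStyle:       # documented accessor style
    def current(self):
        return 128.5

class AttributeStyle:    # behavior reported here
    current = 128.5

print(current_memory(MethodStyle()))     # 128.5
print(current_memory(AttributeStyle()))  # 128.5
```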

NDB projection queries broken when using the datastore v3 stub

Expected Behavior

Projection queries should work as normal when using the datastore stub for testing.

Actual Behavior

Performing a simple projection query when the datastore stub is active results in the following exception:

>>> Foo.query(projection=[Foo.b]).get()                                                                                                                                                       
WARNING:root:suspended generator _run_to_list(query.py:1043) raised EncodeError(Message apphosting_datastore_v3_bytes.Query.Filter is missing required fields: property[0].value)             
WARNING:root:suspended generator _get_async(query.py:1314) raised EncodeError(Message apphosting_datastore_v3_bytes.Query.Filter is missing required fields: property[0].value)               
Traceback (most recent call last):                                                                                                                                                            
  Cell In [15], line 1                                                                                                                                                                        
    Foo.query(projection=[Foo.b]).get()                                                                                                                                                       
  File .../google/appengine/ext/ndb/query.py:1301 in get                                                                                                         
    return self.get_async(**q_options).get_result()                                                                                                                                           
  File .../google/appengine/ext/ndb/tasklets.py:397 in get_result                                                                                                
    self.check_success()
  File .../google/appengine/ext/ndb/tasklets.py:394 in check_success
    six.reraise(self._exception.__class__, self._exception, self._traceback)
  File .../six.py:719 in reraise
    raise value
  File .../google/appengine/ext/ndb/tasklets.py:441 in _help_tasklet_along
    value = gen.throw(exc.__class__, exc, tb)
  File .../google/appengine/ext/ndb/query.py:1314 in _get_async
    res = yield self.fetch_async(1, **q_options)
  File .../google/appengine/ext/ndb/tasklets.py:441 in _help_tasklet_along
    value = gen.throw(exc.__class__, exc, tb)
  File .../google/appengine/ext/ndb/query.py:1043 in _run_to_list
    batch = yield rpc
  File .../google/appengine/ext/ndb/tasklets.py:527 in _on_rpc_completion
    result = rpc.get_result()
  File .../google/appengine/api/apiproxy_stub_map.py:648 in get_result
    return self.__get_result_hook(self)
  File .../google/appengine/datastore/datastore_query.py:2949 in __query_result_hook
    self._batch_shared.conn.check_rpc_success(rpc)
  File .../google/appengine/datastore/datastore_rpc.py:1365 in check_rpc_success
    rpc.check_success()
  File .../google/appengine/api/apiproxy_stub_map.py:614 in check_success
    self.__rpc.CheckSuccess()
  File .../google/appengine/api/apiproxy_rpc.py:149 in CheckSuccess
    raise self.exception
  File .../google/appengine/api/apiproxy_rpc.py:212 in _CaptureTrace
    f()
  File .../google/appengine/api/apiproxy_rpc.py:207 in _SendRequest
    self.stub.MakeSyncCall(self.package, self.call, self.request, self.response)
  File .../google/appengine/api/datastore_file_stub.py:593 in MakeSyncCall
    super(DatastoreFileStub, self).MakeSyncCall(service,
  File .../google/appengine/api/apiproxy_stub.py:143 in MakeSyncCall
    method(request, response)
  File .../google/appengine/datastore/datastore_stub_util.py:3072 in UpdateIndexesWrapper
    return func(self, *args, **kwargs)
  File .../google/appengine/datastore/datastore_stub_util.py:3362 in _Dynamic_RunQuery
    cursor = self._datastore.GetQueryCursor(query, self._trusted, self._app_id)
  File .../google/appengine/datastore/datastore_stub_util.py:2639 in GetQueryCursor
      return self._GetQueryCursor(raw_query, filters, orders, index_list)
  File .../google/appengine/api/datastore_file_stub.py:696 in _GetQueryCursor
    return datastore_stub_util._ExecuteQuery(results, query, filters, orders,
  File .../google/appengine/datastore/datastore_stub_util.py:5129 in _ExecuteQuery
    dsquery = _MakeQuery(query, filters, orders)
  File .../google/appengine/datastore/datastore_stub_util.py:5022 in _MakeQuery
    clone_pb.filter.extend(filters)
EncodeError: Message apphosting_datastore_v3_bytes.Query.Filter is missing required fields: property[0].value

Steps to Reproduce the Problem

Simple test case:

>>> from google.appengine.ext import testbed                                                                                                                                                  
>>> tb = testbed.Testbed()                                                                                                                                                                    
>>> tb.activate()                              
>>> tb.init_datastore_v3_stub()                
>>> tb.init_memcache_stub()                    
>>> from google.appengine.ext import ndb       
>>> class Foo(ndb.Model):                      
...     a = ndb.StringProperty(indexed=True)                                                   
...     b = ndb.StringProperty(indexed=True)                                                   
...                                            
>>> foo = Foo(a='a', b='b')                    
>>> foo.put()                                  
Key('Foo', 1)                                  
>>> Foo.query(projection=[Foo.b]).get()

Analysis

It seems the protobuf protocol for datastore (are these protocols documented anywhere?) expects properties in a filter to have a value, even if it's an empty value (in the case of EXISTS filters, which are created for projections).

In this version of the library, when building a query filter, the value field of projected properties is cleared here.

Whereas on the same line in the old Python 2 App Engine code, it calls:

new_prop.mutable_value()

Why the difference, I don't know. Is it a difference in the v4 protocol? Or just an oversight?

Specifications

  • Version: appengine-python-standard==1.0.0
  • Platform: Python 3.9

`StringProperty` value of `StructuredProperty` is coerced to `bytes` during `choices` validation → BadValueError

Expected Behavior

Given the following example ndb models and code:

from google.appengine.ext import ndb

RED = "red"
GREEN = "green"
BLUE = "blue"

COLORS = [RED, GREEN, BLUE]

class Car(ndb.Model):
    name = ndb.StringProperty(required=True)
    color = ndb.StringProperty(choices=COLORS, required=True)

class Dealership(ndb.Model):
    cars = ndb.StructuredProperty(Car, repeated=True)

d1 = Dealership(cars=[Car(name="Ferrari", color=RED)]).put().get()
print(d1)

d2 = Dealership.query(Dealership.cars == Car(color=RED)).get()
print(d2)

print(str(d1 == d2))
assert d1 == d2

...we should see:

Dealership(key=Key('Dealership', 1), cars=[Car(color='red', name='Ferrari')])
Dealership(key=Key('Dealership', 1), cars=[Car(color='red', name='Ferrari')])
True

...and there are no errors. This works fine in Python 2 with the Google Cloud SDK. 👍

Actual Behavior

Using Python 3.9.16 and appengine-python-standard, however, we see:

BadValueError: Value b'red' for property b'cars.color' is not an allowed choice

...which is confusing because 'red' was provided as the value, not b'red'. 🤔🤔🤔

Digging around a bit, it appears to fail in StructuredProperty._comparison() (triggered by the Dealership.cars == Car(color=RED) expression), specifically right here:

for prop in six.itervalues(self._modelclass._properties):
    vals = prop._get_base_value_unwrapped_as_list(value)

...because the provided color value is converted to a _BaseValue() of bytes via Property._get_base_value_unwrapped_as_list() → Property._get_base_value() → Property._opt_call_to_base_type() → Property._apply_to_values() → TextProperty._to_base_type():

def _to_base_type(self, value):
    if isinstance(value, six.text_type):
        return value.encode('utf-8', 'surrogatepass')

...before being compared to the allowed choices (which are str). 😞
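The underlying mismatch is reproducible in plain Python 3 with no App Engine code involved: encoding a str the way that _to_base_type does yields bytes, which never compares equal to the str entries in choices:

```python
choices = ("red", "green", "blue")  # str values, as in the model above

# What the base-type conversion effectively does to the query value:
value = "red".encode("utf-8", "surrogatepass")

print(value)                       # b'red'
print(value in choices)            # False: bytes never equals str in Python 3
print(value.decode() in choices)   # True once decoded back to str
```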

A workaround is to adjust the values in choices to be of type bytes:

RED = b"red"
GREEN = b"green"
BLUE = b"blue"

...and we then see the same result:

Dealership(key=Key('Dealership', 1), cars=[Car(color='red', name='Ferrari')])
Dealership(key=Key('Dealership', 1), cars=[Car(color='red', name='Ferrari')])
True

(This also works fine in Python 2 with the Google Cloud SDK. 😅)

Steps to Reproduce the Problem

(see code above)

Specifications

  • Version: appengine-python-standard 1.1.3, Python 3.9.16
  • Platform: macOS (Darwin Kernel Version 23.0.0: Fri Sep 15 14:42:42 PDT 2023; root:xnu-10002.1.13~1/RELEASE_X86_64 x86_64)

Additional Info

This does not appear to have anything to do with persisting data -- the persisted value of Dealership.cars[].color remains a str: type(d1.cars[0].color) == str.

It only happens during "comparison" (a key part of querying) when the model with a StringProperty(choices=...) is a StructuredProperty of another model. We can see this by skipping entity creation and just calling Dealership.query(Dealership.cars == Car(color=BLUE)), or even Dealership.cars == Car(color=BLUE) as the most minimal case.

When using a standalone model, all works fine:

ferrari = Car(name="Ferrari", color=RED).put().get()
print(ferrari)

red_car = Car.query(Car.color == RED).get()
print(red_car)

print(str(ferrari == red_car))
assert ferrari == red_car

...yields:

Car(key=Key('Car', 1), color='red', name='Ferrari')
Car(key=Key('Car', 1), color='red', name='Ferrari')
True

UnicodeDecodeError when reading Python 2 objects from memcache

Expected Behavior

memcache.get() returns an object from memcache, even if it was added to memcache using the python27 runtime.

Actual Behavior

Traceback (most recent call last):
  ... <truncated> ...
  File "/srv/services/secrets_svc.py", line 59, in GetSecrets
    secrets = memcache.get(GLOBAL_KEY)
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/google/appengine/api/memcache/__init__.py", line 583, in get
    results = rpc.get_result()
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/google/appengine/api/apiproxy_stub_map.py", line 648, in get_result
    return self.__get_result_hook(self)
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/google/appengine/api/memcache/__init__.py", line 652, in __get_hook
    value = _decode_value(returned_item.value,
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/google/appengine/api/memcache/__init__.py", line 289, in _decode_value
    return do_unpickle(value)
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/google/appengine/api/memcache/__init__.py", line 425, in _do_unpickle
    return unpickler.load()
UnicodeDecodeError: 'ascii' codec can't decode byte 0x93 in position 55: ordinal not in range(128)

Steps to Reproduce the Problem

  1. Call memcache.set() on an ndb.Model object in python27.
  2. Call memcache.get() to get the object in python39.

This reproduces even with MEMCACHE_USE_CROSS_COMPATIBLE_PROTOCOL set in app.yaml. This sets the pickling protocol to 2, which only affects pickling and not unpickling, since the protocol is autodetected upon unpickling.

env_variables:
  MEMCACHE_USE_CROSS_COMPATIBLE_PROTOCOL: "2"

Workaround

There is a workaround by setting the encoding to 'bytes' in the memcache unpickler. It must be 'bytes' instead of 'latin1' because the ndb.Model deserializer expects a bytes object.

import functools
import six

from google.appengine.api import memcache

unpickler = functools.partial(six.moves.cPickle.Unpickler, encoding='bytes')
memcache.setup_client(memcache.Client(unpickler=unpickler))

The Python docs and bug tracker indicate that the encoding argument of pickle.Unpickler() exists to cope with the differences in str between Python 2 and 3.
https://docs.python.org/3/library/pickle.html#pickle.Unpickler
https://bugs.python.org/issue22005
https://stackoverflow.com/questions/28218466/unpickling-a-python-2-object-with-python-3/28218598#28218598
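The failure mode is easy to reproduce without memcache at all. The bytes below are a hand-crafted protocol-2 pickle of a Python 2 str containing the non-ASCII byte 0x93 (the same byte as in the traceback above); Python 3 fails to load it with the default encoding but succeeds with encoding='bytes':

```python
import pickle

# Hand-crafted protocol-2 pickle of the Python 2 str '\x93abc':
# \x80\x02 = PROTO 2, U = SHORT_BINSTRING, \x04 = length, payload,
# q\x00 = BINPUT 0, . = STOP
py2_pickle = b"\x80\x02U\x04\x93abcq\x00."

try:
    pickle.loads(py2_pickle)  # default encoding is ASCII
except UnicodeDecodeError as e:
    print("default:", e)      # same error class as in the traceback

print(pickle.loads(py2_pickle, encoding="bytes"))   # b'\x93abc'
print(pickle.loads(py2_pickle, encoding="latin1"))  # '\x93abc'
```

This also shows why 'bytes' and 'latin1' differ: 'latin1' produces a str, while the ndb.Model deserializer expects bytes.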

Specifications

  • Version: appengine-python-standard==0.3.1 and appengine-python-standard==1.0.0
  • Platform: python39

Elevated latency, instance count and new errors/warnings compared to Python 2 runtime

We are using F1 instances (2 cores). In Python 3, with no meaningful code changes, we are seeing 1.5-2x median latency across the app, an even greater increase in created/active/billed instances with the same appengine settings, and some new errors and warnings we never saw before. For example, we see this a lot now with datastore operations:

...
  File "/layers/google.python.pip/pip/lib/python3.11/site-packages/google/appengine/datastore/datastore_query.py", line 2949, in __query_result_hook
    self._batch_shared.conn.check_rpc_success(rpc)
  File "/layers/google.python.pip/pip/lib/python3.11/site-packages/google/appengine/datastore/datastore_rpc.py", line 1365, in check_rpc_success
    rpc.check_success()
  File "/layers/google.python.pip/pip/lib/python3.11/site-packages/google/appengine/api/apiproxy_stub_map.py", line 614, in check_success
    self.__rpc.CheckSuccess()
  File "/layers/google.python.pip/pip/lib/python3.11/site-packages/google/appengine/api/apiproxy_rpc.py", line 149, in CheckSuccess
    raise self.exception
  File "/layers/google.python.pip/pip/lib/python3.11/site-packages/google/appengine/runtime/default_api_stub.py", line 276, in _CaptureTrace
    f(**kwargs)
  File "/layers/google.python.pip/pip/lib/python3.11/site-packages/google/appengine/runtime/default_api_stub.py", line 261, in _SendRequest
    raise self._ErrorException(*_DEFAULT_EXCEPTION)
google.appengine.runtime.apiproxy_errors.RPCFailedError: The remote RPC to the application server failed for call datastore_v3.RunQuery().

^^ (similar traces also occur with datastore_v3.Get())

We also see tons of these warnings, unrelated to any outgoing web calls our app is making (I know it's just a warning and might be harmless, but the extremely high volume in our logs is worrying):

Connection pool is full, discarding connection: appengine.googleapis.internal. Connection pool size: 10

I've been tweaking app.yaml various ways, but I can't find a configuration that solves or even significantly reduces either issue. Our app is doing the exact same operations as before, under the same load. What can we try? This isn't sustainable: we can't afford this jump in cost, and the app is performing much worse now. Our unit tests run in about 30% less time compared to Python 2, so I didn't expect this; it seems like it could be I/O related?

Not sure if this is relevant, but I noticed this in google.appengine.api.apiproxy_rpc.py:

_MAX_CONCURRENT_API_CALLS = 100

_THREAD_POOL = futures.ThreadPoolExecutor(_MAX_CONCURRENT_API_CALLS)

But then in concurrent.futures.threads.py we see this comment in the constructor:

class ThreadPoolExecutor(_base.Executor):

    # Used to assign unique thread names when thread_name_prefix is not supplied.
    _counter = itertools.count().__next__

    def __init__(self, max_workers=None, thread_name_prefix='',
                 initializer=None, initargs=()):
        """Initializes a new ThreadPoolExecutor instance.

        Args:
            max_workers: The maximum number of threads that can be used to
                execute the given calls.
            thread_name_prefix: An optional name prefix to give our threads.
            initializer: A callable used to initialize worker threads.
            initargs: A tuple of arguments to pass to the initializer.
        """
        if max_workers is None:
            # ThreadPoolExecutor is often used to:
            # * CPU bound task which releases GIL
            # * I/O bound task (which releases GIL, of course)
            #
            # We use cpu_count + 4 for both types of tasks.
            # But we limit it to 32 to avoid consuming surprisingly large resource
            # on many core machine.
            max_workers = min(32, (os.cpu_count() or 1) + 4)

So the default number of workers would be 6 on a 2-core instance, but apiproxy_rpc.py sets it to 100 regardless of the actual number of CPU cores available. And this stdlib default is capped at 32 precisely to avoid "consuming surprisingly large resource" on many-core machines. What led to the decision to use 100 here? (Edit: is this maybe OK because RPC calls aren't CPU-bound? Should the urllib3 connection pool size be increased to match this?)
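The default the stdlib would pick can be checked directly. A quick sketch; _max_workers is a private CPython attribute, read here only for inspection:

```python
import os
from concurrent import futures

# The stdlib formula quoted above:
default_workers = min(32, (os.cpu_count() or 1) + 4)

pool = futures.ThreadPoolExecutor()  # no max_workers: uses the default
print(pool._max_workers, default_workers)  # equal; 6 on a 2-core machine
pool.shutdown()
```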

datastore_stub_util fails in pytest with AssertionError

Expected Behavior

The tests should have passed.

From our own investigation, the issue is not related to the datastore stub in general, but when we test a flask view in which there is interaction with the datastore service.

The issue appears when we use:

  • GAE 2nd gen
  • Python 3
  • Flask
  • appengine-python-standard
  • pytest

Actual Behavior

../../../miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/datastore/datastore_stub_util.py:469: AssertionError
-------------------------------------------------------------------------------------------------------------- Captured log call --------------------------------------------------------------------------------------------------------------
WARNING root:tasklets.py:482 suspended generator _put_tasklet(context.py:382) raised AssertionError()
WARNING root:tasklets.py:482 suspended generator put(context.py:850) raised AssertionError()
ERROR root:middlewares.py:155 Traceback (most recent call last):
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/runtime/middlewares.py", line 140, in ErrorLoggingMiddleware
return app(wsgi_env, start_response)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/runtime/middlewares.py", line 82, in <lambda>
lambda app: lambda wsgi_env, start_resp: f(app, wsgi_env, start_resp),
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/runtime/middlewares.py", line 378, in BackgroundAndShutdownMiddleware
return app(wsgi_env, start_response)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/runtime/middlewares.py", line 82, in <lambda>
lambda app: lambda wsgi_env, start_resp: f(app, wsgi_env, start_resp),
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/runtime/middlewares.py", line 405, in SetNamespaceFromHeader
return app(wsgi_env, start_response)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/flask/app.py", line 2080, in wsgi_app
response = self.handle_exception(e)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/flask/app.py", line 2077, in wsgi_app
response = self.full_dispatch_request()
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/flask/app.py", line 1525, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/flask/app.py", line 1523, in full_dispatch_request
rv = self.dispatch_request()
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/flask/app.py", line 1509, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/my_home_dir/Downloads/migrate-python2-appengine-master/mod1b-flask/main.py", line 39, in root
store_visit(request.remote_addr, request.user_agent)
File "/my_home_dir/Downloads/migrate-python2-appengine-master/mod1b-flask/main.py", line 30, in store_visit
Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/ext/ndb/model.py", line 3538, in _put
return self._put_async(**ctx_options).get_result()
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/ext/ndb/tasklets.py", line 397, in get_result
self.check_success()
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/ext/ndb/tasklets.py", line 394, in check_success
six.reraise(self._exception.__class__, self._exception, self._traceback)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/six.py", line 719, in reraise
raise value
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/ext/ndb/tasklets.py", line 441, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/ext/ndb/context.py", line 850, in put
key = yield self._put_batcher.add(entity, options)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/ext/ndb/tasklets.py", line 441, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/ext/ndb/context.py", line 382, in _put_tasklet
keys = yield self._conn.async_put(options, datastore_entities)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/ext/ndb/tasklets.py", line 527, in _on_rpc_completion
result = rpc.get_result()
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/api/apiproxy_stub_map.py", line 648, in get_result
return self.__get_result_hook(self)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/datastore/datastore_rpc.py", line 1875, in __put_hook
self.check_rpc_success(rpc)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/datastore/datastore_rpc.py", line 1365, in check_rpc_success
rpc.check_success()
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/api/apiproxy_stub_map.py", line 614, in check_success
self.__rpc.CheckSuccess()
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/api/apiproxy_rpc.py", line 149, in CheckSuccess
raise self.exception
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/api/apiproxy_rpc.py", line 212, in _CaptureTrace
f()
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/api/apiproxy_rpc.py", line 207, in _SendRequest
self.stub.MakeSyncCall(self.package, self.call, self.request, self.response)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/api/datastore_file_stub.py", line 593, in MakeSyncCall
super(DatastoreFileStub, self).MakeSyncCall(service,
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/api/apiproxy_stub.py", line 143, in MakeSyncCall
method(request, response)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/datastore/datastore_stub_util.py", line 3344, in _Dynamic_Put
results = self._datastore.Put(req.entity, res.cost, transaction,
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/datastore/datastore_stub_util.py", line 2772, in Put
CheckEntity(trusted, calling_app, raw_entity)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/datastore/datastore_stub_util.py", line 522, in CheckEntity
CheckReference(request_trusted, request_app_id, entity.key, False)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/datastore/datastore_stub_util.py", line 494, in CheckReference
CheckAppId(request_trusted, request_app_id, key.app)
File "/my_home_dir/miniconda3/envs/testndb/lib/python3.9/site-packages/google/appengine/datastore/datastore_stub_util.py", line 469, in CheckAppId
assert app_id
AssertionError
=========================================================================================================== short test summary info ===========================================================================================================
FAILED main_test.py::test_index - AssertionError
============================================================================================================== 1 failed in 0.78s ==============================================================================================================

Steps to Reproduce the Problem

  1. Clone https://github.com/googlecodelabs/migrate-python2-appengine
  2. Go to mod1b-flask
  3. python3 -m venv env
  4. source env/bin/activate
  5. pip install -r requirements.txt
  6. pip install pytest
  7. Create a new file test_main.py with the following content:
import pytest
import requests
import main

@pytest.fixture
def testbed():
    from google.appengine.ext import testbed
    from google.appengine.datastore import datastore_stub_util

    testbed = testbed.Testbed()
    testbed.activate()
    testbed.init_datastore_v3_stub(
        consistency_policy=datastore_stub_util.PseudoRandomHRConsistencyPolicy(probability=1)
    )
    testbed.init_memcache_stub()
    testbed.init_app_identity_stub()

    yield testbed

    testbed.deactivate()

@pytest.fixture
def client():
    main.app.testing = True
    client = main.app.test_client()
    reset_datastore()  # clean up before every test
    yield client

def reset_datastore():
    # clean up/delete the database (reset Datastore)
    response = requests.post("http://localhost:8081/reset")
    assert response.status_code == 200

def test_index(testbed, client):
    r = client.get('/')
    assert r.status_code == 200
  8. gcloud beta emulators datastore start --project=test-project --host-port localhost:8081 --no-store-on-disk
  9. pytest . -p no:warnings

Specifications

The latest versions of the gcloud sdk and respective libraries are installed.

The packages installed in the virtualenv are presented below:

appengine-python-standard 1.0.0
attrs 21.4.0
cachetools 5.2.0
certifi 2022.6.15
charset-normalizer 2.1.0
click 8.1.3
Flask 2.1.2
frozendict 2.3.2
google-auth 2.9.0
idna 3.3
importlib-metadata 4.12.0
itsdangerous 2.1.2
Jinja2 3.1.2
MarkupSafe 2.1.1
mock 4.0.3
Pillow 9.2.0
pip 22.0.4
protobuf 4.21.2
pyasn1 0.4.8
pyasn1-modules 0.2.8
pytz 2022.1
requests 2.28.1
rsa 4.8
ruamel.yaml 0.17.21
ruamel.yaml.clib 0.2.6
setuptools 58.1.0
six 1.16.0
urllib3 1.26.10
Werkzeug 2.1.2
zipp 3.8.0

NDB_PY2_UNPICKLE_COMPAT?

There is an env var being checked in a couple places that I don't see mentioned outside the code:

class PickleProperty(BlobProperty):
...
  def _from_base_type(self, value):
    try:
      return pickle.loads(value)
    except UnicodeDecodeError:
      if int(os.environ.get('NDB_PY2_UNPICKLE_COMPAT', '0')):
        return pickle.loads(value, encoding='bytes')
      raise
class Key(object):
...
  def __new__(cls, *_args, **kwargs):
  ...
    if int(os.environ.get('NDB_PY2_UNPICKLE_COMPAT', '0')):
      kwargs = {six.ensure_str(k): v for (k, v) in kwargs.items()}

Should we set this? Is it set for us? Should we avoid it?
I'm running into assorted pickle-related issues that had me looking at this (not-so-)recent change, and it's tricky to figure out whether it's better to attempt more "global" fixes or just site-specific workarounds that avoid the messier problems.
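For what it's worth, the compat branch can be exercised directly. The byte string below is a hand-written protocol-2 pickle of the Python 2 str '\xe6', chosen purely for illustration; real values would come out of a PickleProperty:

```python
import pickle

# A protocol-2 pickle of the Python 2 str '\xe6' (one non-ASCII byte).
py2_pickle = b'\x80\x02U\x01\xe6q\x00.'

try:
    value = pickle.loads(py2_pickle)  # default encoding is ASCII
except UnicodeDecodeError:
    # This is the fallback PickleProperty takes when
    # NDB_PY2_UNPICKLE_COMPAT is set.
    value = pickle.loads(py2_pickle, encoding='bytes')

print(value)  # b'\xe6'
```

So with the env var set, Python 2 str payloads come back as bytes rather than raising; callers then have to cope with bytes values where they previously saw str.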

ndb.PickleProperty UnicodeDecodeError: 'ascii' codec can't decode byte with NDB_USE_CROSS_COMPATIBLE_PICKLE_PROTOCOL

Expected Behavior

No decode issues

Actual Behavior

Decode issues

Steps to Reproduce the Problem

  1. Set NDB_USE_CROSS_COMPATIBLE_PICKLE_PROTOCOL: True in app.yaml
  2. Write an ndb.PickleProperty value under python27
  3. Read the ndb.PickleProperty back under python312
  4. Observe the failure: File "/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/ext/ndb/model.py", line 1913, in _from_base_type UnicodeDecodeError: 'ascii' codec can't decode byte 0xe6 in position 1: ordinal not in range(128)

Specifications

  • Version: latest
  • Platform: python312

Reporting an issue with the legacy Python 2.7 runtime files?

I don't think this is necessarily the right place to report this issue, but I don't know where else to report a bug in the AppEngine standard files installed through the GCloud SDK. I have been talking with GCP support for a week, but they have not yet figured out who to forward the problem to.

It also seems like this issue is related to the work being done in this project.

This issue is about the SDK for developing Python 2.7 AppEngine Standard projects. My team is well aware that Python 2.7 has reached EOL and we are working to migrate to Python 3, but GCP continues to support the legacy Python 2.7 runtime.

Expected Behavior

The Google Cloud SDK, v358.0.0, contains an update to the app-engine-python and app-engine-python-extras modules that breaks the testbed/api_server functionality provided with the legacy AppEngine runtime for Python 2.7.

The following works with Google Cloud SDK v357.0.0, which contains app-engine-python and app-engine-python-extras v1.9.93, but it raises an error with Google Cloud SDK v358.0.0 and v359, which contain v1.9.94 of the App Engine components.

Under Python 2.7, the following sample code should run without raising any exceptions:

import sys
PATH_TO_GOOGLE_CLOUD_SDK = "/Users/wsorvis/google-cloud-sdk" # You will have to update this path
APP_ENGINE_RUNTIME = PATH_TO_GOOGLE_CLOUD_SDK + "/platform/google_appengine"
sys.path.insert(0, APP_ENGINE_RUNTIME)
import dev_appserver
dev_appserver.fix_sys_path()

from google.appengine.ext import testbed
mytestbed = testbed.Testbed()
mytestbed.activate(use_datastore_emulator=True)
mytestbed.init_datastore_v3_stub()

Again, this works as expected with Google Cloud SDK v357 (and v1.9.93 of the old AppEngine SDK files)

Actual Behavior

However, after updating to Google Cloud SDK v358, it raises the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/wsorvis/google-cloud-sdk/platform/google_appengine/google/appengine/ext/testbed/__init__.py", line 640, in init_datastore_v3_stub
    delegate_stub.Clear()
  File "/Users/wsorvis/google-cloud-sdk/platform/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 670, in Clear
    self._server.Send('/clear?service=datastore_v3')
  File "/Users/wsorvis/google-cloud-sdk/platform/google_appengine/google/appengine/tools/appengine_rpc.py", line 486, in Send
    f = self.opener.open(req)
  File "/Users/wsorvis/.pyenv/versions/2.7.18/lib/python2.7/urllib2.py", line 435, in open
    response = meth(req, response)
  File "/Users/wsorvis/.pyenv/versions/2.7.18/lib/python2.7/urllib2.py", line 548, in http_response
    'http', request, response, code, msg, hdrs)
  File "/Users/wsorvis/.pyenv/versions/2.7.18/lib/python2.7/urllib2.py", line 473, in error
    return self._call_chain(*args)
  File "/Users/wsorvis/.pyenv/versions/2.7.18/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/Users/wsorvis/.pyenv/versions/2.7.18/lib/python2.7/urllib2.py", line 556, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 500: Internal Server Error

Steps to Reproduce the Problem

  1. Install Google Cloud SDK v357, and both of app-engine-python and app-engine-python-extras components
  2. Run the sample test code above with Python 2.7, observe that it does not raise an exception and it returns a 0 status code
  3. Update to Google Cloud SDK v358 (and ensure that the appengine components update to 1.9.94)
  4. Run the sample test code again, observe the exception it throws
  5. Try again with Google Cloud SDK v359, observe the same error (it also contains v1.9.94 of the AppEngine components)

Specifications

  • Version: See discussion above
  • Platform: I am running the test suite on macOS 11 (11.5 and 11.6)

Root Cause

The testbed runs a sub-process that implements/wraps API calls to the Datastore Emulator, launched as api_server.py. The 500 error above occurs when the main process attempts to send a request to the sub-process.

By patching the code that launches the api_server.py within the testbed's code to print STDOUT and STDERR from the api_server.py when it exits, I was able to detect the following stack trace that likely caused the 500 response:

Traceback (most recent call last):
  File "/Users/wsorvis/google-cloud-sdk/platform/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 1302, in communicate
    req.respond()
  File "/Users/wsorvis/google-cloud-sdk/platform/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 831, in respond
    self.server.gateway(self).respond()
  File "/Users/wsorvis/google-cloud-sdk/platform/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 2115, in respond
    response = self.req.server.wsgi_app(self.env, self.start_response)
  File "/Users/wsorvis/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 302, in __call__
    return app(environ, start_response)
  File "/Users/wsorvis/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 342, in __call__
    return self.app(environ, start_response)
  File "<string>", line 547, in __call__
  File "<string>", line 521, in _handle_CLEAR
  File "/Users/wsorvis/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/datastore_grpc_stub.py", line 126, in Clear
    response = six.moves.urllib.urlopen(
AttributeError: 'Module_six_moves_urllib' object has no attribute 'urlopen'

The following snippet from google/appengine/tools/devappserver2/datastore_grpc_stub.py shows where the error occurred:

  def Clear(self):
    # api_server.py has _handle_CLEAR() method which requires this interface for
    # reusing api_server between unittests.
    response = six.moves.urllib.urlopen(
        six.moves.urllib.Request(
            'http://%s/reset' % self.grpc_apiserver_host, data=''))
    if response.code != six.moves.http_client.OK:
      raise IOError('The Cloud Datastore emulator did not reset successfully.')

six.moves.urllib.urlopen and six.moves.urllib.Request are not valid calls to six, which caused the 500 error. They should be six.moves.urllib.request.urlopen and six.moves.urllib.request.Request.
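The mismatch is easy to verify with six alone (assuming a reasonably recent six, where the urllib moves exist):

```python
import six

# Module_six_moves_urllib only exposes the submodules (parse, error,
# request, response, robotparser); urlopen is not a direct attribute,
# which is exactly the AttributeError in the stack trace above.
assert not hasattr(six.moves.urllib, 'urlopen')

# The working spellings live one level down:
assert hasattr(six.moves.urllib.request, 'urlopen')
assert hasattr(six.moves.urllib.request, 'Request')
```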

By grepping through the legacy runtime's platform files in Google Cloud SDK v358, I spotted a few other cases where it looks like six is not being used correctly:

bad_six_references

Workaround

For now, we are working around this by pinning to Google Cloud SDK v357.0 when we run tests or run the application locally and in CI.

`deferred` "Attempted RPC call without active security ticket" error

Expected Behavior

Enqueuing a deferred task

Actual Behavior

Attempting to enqueue a deferred task throws an error due to a missing security ticket.

Stack trace:

Traceback (most recent call last):
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/app.py", line 2073, in wsgi_app
    response = self.full_dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/app.py", line 1518, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/app.py", line 1516, in full_dispatch_request
    rv = self.dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/flask/app.py", line 1502, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/workspace/backend/tasks_io/main.py", line 24, in test
    deferred.defer(do_the_thing)
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/google/appengine/ext/deferred/deferred.py", line 305, in defer
    return task.add(queue, transactional=transactional)
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/google/appengine/api/taskqueue/taskqueue.py", line 1292, in add
    return self.add_async(queue_name, transactional).get_result()
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/google/appengine/api/taskqueue/taskqueue.py", line 1288, in add_async
    return Queue(queue_name).add_async(self, transactional, rpc)
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/google/appengine/api/taskqueue/taskqueue.py", line 2139, in add_async
    return self.__AddTasks(tasks,
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/google/appengine/api/taskqueue/taskqueue.py", line 2278, in __AddTasks
    return _MakeAsyncCall('BulkAdd',
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/google/appengine/api/taskqueue/taskqueue.py", line 488, in _MakeAsyncCall
    rpc.make_call(method, request, response, get_result_hook, None)
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/google/appengine/api/apiproxy_stub_map.py", line 565, in make_call
    self.__rpc.MakeCall(self.__service, method, request, response)
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/google/appengine/api/apiproxy_rpc.py", line 133, in MakeCall
    self._MakeCallImpl()
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/google/appengine/runtime/default_api_stub.py", line 173, in _MakeCallImpl
    raise apiproxy_errors.RPCFailedError(
google.appengine.runtime.apiproxy_errors.RPCFailedError: Attempted RPC call without active security ticket

Steps to Reproduce the Problem

  1. Create a base Flask app with use_deferred=True

main.py

from flask import Flask
from google.appengine.api import wrap_wsgi_app

app = Flask(__name__)
app.wsgi_app = wrap_wsgi_app(app.wsgi_app, use_legacy_context_mode=False, use_deferred=True)


def do_the_thing():
    print(f"We did the thing")


@app.route('/')
def index():
    from google.appengine.ext import deferred

    deferred.defer(do_the_thing)

app.yaml

runtime: python38
entrypoint: gunicorn -b :$PORT main:app
app_engine_apis: true

handlers:
  - url: /.*
    script: auto
  2. Run the app
$ dev_appserver.py --runtime_python_path=$(which python3) app.yaml
  3. Navigate to http://localhost:8080/ in a browser and observe the error

Specifications

  • Version: v0.3.1
  • Platform: Ubuntu 20.04.3 (also replicated on an upstream GAE deploy)

NDB transaction() read_only property does not work

Expected Behavior

The NDB transaction() function has a read_only property to indicate that the transaction will only perform entity reads, potentially improving throughput. I expect that it will start a read only transaction when I invoke it as such: ndb.transaction(txn, read_only=True).

Actual Behavior

Sadly, this flag does not work in the current SDK. Instead, it fails with a TypeError: Unknown configuration option ('read_only') error when you specify it (regardless of whether it is True or False).

Full stack trace:

Traceback (most recent call last):
  File "/Users/tijmen/dev/ndb-read-only/readonly_test.py", line 22, in test_readonly_transaction
    ndb.transaction(txn, read_only=False)  # Does not work
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tijmen/dev/ndb-read-only/lib/google/appengine/ext/ndb/utils.py", line 182, in positional_wrapper
    return wrapped(*args, **kwds)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tijmen/dev/ndb-read-only/lib/google/appengine/ext/ndb/model.py", line 3902, in transaction
    return fut.get_result()
           ^^^^^^^^^^^^^^^^
  File "/Users/tijmen/dev/ndb-read-only/lib/google/appengine/ext/ndb/tasklets.py", line 397, in get_result
    self.check_success()
  File "/Users/tijmen/dev/ndb-read-only/lib/google/appengine/ext/ndb/tasklets.py", line 394, in check_success
    six.reraise(self._exception.__class__, self._exception, self._traceback)
  File "/Users/tijmen/dev/ndb-read-only/lib/six.py", line 719, in reraise
    raise value
  File "/Users/tijmen/dev/ndb-read-only/lib/google/appengine/ext/ndb/tasklets.py", line 444, in _help_tasklet_along
    value = gen.send(val)
            ^^^^^^^^^^^^^
  File "/Users/tijmen/dev/ndb-read-only/lib/google/appengine/ext/ndb/context.py", line 976, in transaction
    options = _make_ctx_options(ctx_options, TransactionOptions)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tijmen/dev/ndb-read-only/lib/google/appengine/ext/ndb/context.py", line 145, in _make_ctx_options
    return config_cls(**ctx_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tijmen/dev/ndb-read-only/lib/google/appengine/datastore/datastore_rpc.py", line 392, in __new__
    raise TypeError('Unknown configuration option (%s)' % err)
TypeError: Unknown configuration option ('read_only')

Steps to Reproduce the Problem

Run the following test:

import unittest
from google.appengine.ext import testbed
from google.appengine.ext import ndb

class ExampleModel(ndb.Model):
    pass

class LoggingTest(unittest.TestCase):
    def setUp(self):
        self.testbed = testbed.Testbed()
        self.testbed.setup_env()
        self.testbed.activate()
        self.testbed.init_memcache_stub()
        self.testbed.init_datastore_v3_stub()

    def tearDown(self):
        self.testbed.deactivate()

    def test_readonly_transaction(self):
        def txn():
            model = ExampleModel.get_by_id("test")
        ndb.transaction(txn, read_only=True)  # Does not work

    def test_normal_transaction(self):
        def txn():
            model = ExampleModel.get_by_id("test")
        ndb.transaction(txn)  # Works.


if __name__ == '__main__':
    unittest.main()

Specifications

Version: appengine-python-standard 1.1.6

`deferred` compatibility with `dev_appserver`

I think #31 and #42 may not go far enough.

I encountered an issue with a similar trace to the above, even after making sure DEFERRED_USE_CROSS_COMPATIBLE_PICKLE_PROTOCOL was set:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib/google-cloud-sdk/platform/google_appengine/google/appengine/api/taskqueue/taskqueue_stub.py", line 2192, in MainLoop
    self._ProcessQueues()
  File "/usr/lib/google-cloud-sdk/platform/google_appengine/google/appengine/api/taskqueue/taskqueue_stub.py", line 2137, in _ProcessQueues
    response_code = self.task_executor.ExecuteTask(task, queue)
  File "/usr/lib/google-cloud-sdk/platform/google_appengine/google/appengine/api/taskqueue/taskqueue_stub.py", line 2069, in ExecuteTask
    '0.1.0.2')
  File "/usr/lib/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 803, in add_request
    fake_login=fake_login)
  File "/usr/lib/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 1250, in build_request_environ
    'wsgi.input': six.StringIO(six.ensure_text(body))
  File "/usr/lib/google-cloud-sdk/platform/google_appengine/google/appengine/_internal/six/__init__.py", line 952, in ensure_text
    return s.decode(encoding, errors)
  File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xc0 in position 529: invalid start byte

What dev_appserver does

The taskqueue stub has to re-marshal the payload back into an HTTP request to the local server, which can then un-pickle and process it. Here's the code (which, remember, is still running under Python 2):

# From /usr/lib/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py
  def build_request_environ(self,
                            method,
                            relative_url,
                            headers,
                            body,
                            source_ip,
                            port,
                            fake_login=False):
    logging.warning("task body: {}".format(body))
    if isinstance(body, six.text_type):
      body = body.encode('ascii')

    url = six.moves.urllib.parse.urlsplit(relative_url)
    if port != 80:
      if ':' in self.host:
        host = '[%s]:%s' % (self.host, port)
      else:
        host = '%s:%s' % (self.host, port)
    else:
      host = self.host
    import base64
    environ = {
        constants.FAKE_IS_ADMIN_HEADER: '1',
        'CONTENT_LENGTH': str(len(body)),
        'PATH_INFO': url.path,
        'QUERY_STRING': url.query,
        'REQUEST_METHOD': method,
        'REMOTE_ADDR': source_ip,
        'SERVER_NAME': self.host,
        'SERVER_PORT': str(port),
        'SERVER_PROTOCOL': 'HTTP/1.1',
        'wsgi.version': (1, 0),
        'wsgi.url_scheme': 'http',
        'wsgi.errors': six.StringIO(),
        'wsgi.multithread': True,
        'wsgi.multiprocess': True,
        'wsgi.input': six.StringIO(six.ensure_text(body))
    }
    if fake_login:
      environ[constants.FAKE_LOGGED_IN_HEADER] = '1'
    util.put_headers_in_environ(headers, environ)
    environ['HTTP_HOST'] = host
    return environ

Why this can fail

Even when using pickle protocol 0 (which DEFERRED_USE_CROSS_COMPATIBLE_PICKLE_PROTOCOL enables), the serialized contents can still contain bytes that are not valid UTF-8.

Example

Pickle protocol 0 doesn't actually guarantee to be 100% ascii characters, see https://bugs.python.org/issue38241
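A minimal demonstration of this, independent of NDB: protocol 0 writes str objects using raw-unicode-escape, which leaves Latin-1 code points as raw high bytes (the string 'é' below is just an illustrative choice):

```python
import pickle

# Protocol 0 emits str payloads via the UNICODE opcode using
# raw-unicode-escape, so Latin-1 characters become raw bytes > 0x7f.
data = pickle.dumps('é', protocol=0)
assert any(b > 0x7f for b in data)   # not pure ASCII
assert pickle.loads(data) == 'é'     # still round-trips correctly
```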

The easiest example is an argument to the deferred function that contains an ndb.Model instance with an ndb.StructuredProperty (which ends up serialized as a protobuf encoding, containing bytes outside the ASCII range). Even without the validation, this would run into trouble with the valid character ranges for HTTP requests.

Python 3.10.0 (default, Oct  4 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from google.appengine.ext import ndb
>>> import pickle
>>> 
>>> class SubModel(ndb.Model):
...     prop = ndb.StringProperty()
... 
>>> class TestModel(ndb.Model):
...     sub_prop = ndb.StructuredProperty(SubModel)
... 
>>> m = TestModel(sub_prop=SubModel(prop="test"))
>>> pickled = pickle.dumps(m, 0)
>>> 
>>> import six
>>> six.ensure_text(pickled)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/phil/Documents/tba/the-blue-alliance-py3/venv/lib/python3.10/site-packages/six.py", line 951, in ensure_text
    return s.decode(encoding, errors)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x82 in position 181: invalid start byte

A Better Solution?

Instead of using protocol 0, I think we can use any Python 2-compatible protocol but also base64-encode the pickled data, so we can be absolutely sure the characters we pass on the wire are within the expected range.

This would break cases where code running this library enqueues a deferred task to run against the legacy builtin version, so this may not be fully acceptable?

Also, I can't fully figure out why the example in my app didn't hit this problem when running entirely under Python 2. The best I can figure is that the legacy runtime didn't perform that validation, and without the Python 3 str/bytes split in the mix, it may simply never have come up.
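The base64 proposal above could be sketched like this (function names are mine for illustration, not the library's):

```python
import base64
import pickle

def serialize_payload(obj):
    # Protocol 2 is the highest protocol Python 2 understands; base64
    # guarantees the wire payload contains only ASCII characters.
    return base64.b64encode(pickle.dumps(obj, protocol=2))

def deserialize_payload(data):
    return pickle.loads(base64.b64decode(data))

# Even a payload full of non-UTF-8 bytes survives as pure ASCII:
payload = serialize_payload({'blob': b'\x80\x82\xff'})
assert payload.isascii()
assert deserialize_payload(payload) == {'blob': b'\x80\x82\xff'}
```

The trade-off, as noted, is compatibility: the legacy builtin deferred handler would not know to base64-decode these payloads.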

GenericProperty (and therefore Expando) changes str values to bytes (sometimes)

Expected Behavior

If I set the value of a GenericProperty to a str, then read it back from datastore, the value should still always be a str. The value I read back should equal the value I put in.

Actual Behavior

If I set a GenericProperty to the string 'oops', then read it back from Datastore, the value is now b'oops', so it is not equal to the value I put in the property.
If I set a GenericProperty to the string 'oøps', then read it back from Datastore, the value is still 'oøps'. The behavior apparently differs depending on the contents of the string.

The same thing happens in appengine Python 2 incidentally, but it was not a big problem in practice because in Python 2, 'oops' == u'oops' was True. In Python 3 it is a problem :)
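Until this is fixed, a defensive workaround is to coerce values read back from a GenericProperty; normalize() below is a hypothetical helper, not part of the SDK:

```python
def normalize(value):
    # ASCII-only strings come back from GenericProperty as bytes;
    # decode them so comparisons against str behave consistently.
    if isinstance(value, bytes):
        return value.decode('utf-8')
    return value

# In Python 3, bytes and str never compare equal...
assert b'oops' != 'oops'
# ...but the coerced value does, whatever form the property returned.
assert normalize(b'oops') == 'oops'
assert normalize('oops') == 'oops'
```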

Steps to Reproduce the Problem

class Oops(ndb.Model):
    generic_prop = ndb.GenericProperty()
    string_prop = ndb.StringProperty()

oops = Oops(generic_prop="oøps", string_prop="oøps")
assert "oøps" == oops.generic_prop == oops.string_prop
oops.put()
assert "oøps" == oops.generic_prop == oops.string_prop
oops = oops.key.get()
assert "oøps" == oops.generic_prop == oops.string_prop
oops = oops.key.get(use_cache=False)
assert oops.generic_prop == oops.string_prop  # <- Succeeds: both properties are still "oøps"

oops = Oops(generic_prop="oops", string_prop="oops")
assert "oops" == oops.generic_prop == oops.string_prop
oops.put()
assert "oops" == oops.generic_prop == oops.string_prop
oops = oops.key.get()
assert "oops" == oops.generic_prop == oops.string_prop
oops = oops.key.get(use_cache=False)
assert oops.generic_prop == oops.string_prop  # <- Fails, because now generic_prop is b'oops'

Specifications

  • Version: appengine-python-standard 1.1.3, Python 3.11.4
  • Platform: MacOS

UnicodeDecodeError when producing query cursors on dev_appserver

Expected Behavior

No exception is thrown.

Actual Behavior

Traceback (most recent call last):
  File "<string>", line 6, in <module>
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/ext/ndb/utils.py", line 182, in positional_wrapper
    return wrapped(*args, **kwds)
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/ext/ndb/query.py", line 1266, in fetch
    return self.fetch_async(limit, **q_options).get_result()
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/ext/ndb/tasklets.py", line 397, in get_result
    self.check_success()
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/ext/ndb/tasklets.py", line 394, in check_success
    six.reraise(self._exception.__class__, self._exception, self._traceback)
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/six.py", line 719, in reraise
    raise value
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/ext/ndb/tasklets.py", line 441, in _help_tasklet_along
    value = gen.throw(exc.__class__, exc, tb)
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/ext/ndb/query.py", line 1043, in _run_to_list
    batch = yield rpc
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/ext/ndb/tasklets.py", line 527, in _on_rpc_completion
    result = rpc.get_result()
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/api/apiproxy_stub_map.py", line 648, in get_result
    return self.__get_result_hook(self)
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/datastore/datastore_query.py", line 2949, in __query_result_hook
    self._batch_shared.conn.check_rpc_success(rpc)
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/datastore/datastore_rpc.py", line 1365, in check_rpc_success
    rpc.check_success()
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/api/apiproxy_stub_map.py", line 614, in check_success
    self.__rpc.CheckSuccess()
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/api/apiproxy_rpc.py", line 149, in CheckSuccess
    raise self.exception
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/runtime/default_api_stub.py", line 266, in _CaptureTrace
    f(**kwargs)
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/appengine/runtime/default_api_stub.py", line 262, in _SendRequest
    self.response.ParseFromString(parsed_response.response)
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/protobuf/message.py", line 199, in ParseFromString
    return self.MergeFromString(serialized)
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/protobuf/internal/python_message.py", line 1128, in MergeFromString
    if self._InternalParse(serialized, 0, length) != length:
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/protobuf/internal/python_message.py", line 1195, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/protobuf/internal/decoder.py", line 732, in DecodeField
    if value._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/protobuf/internal/python_message.py", line 1195, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/protobuf/internal/decoder.py", line 681, in DecodeField
    pos = value._InternalParse(buffer, pos, end)
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/protobuf/internal/python_message.py", line 1195, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/protobuf/internal/decoder.py", line 597, in DecodeField
    field_dict[key] = _ConvertToUnicode(buffer[pos:new_pos])
  File "/var/folders/yh/l2c69w5j6q71gzhvs454p7s80000gn/T/tmpxVxXxD/lib/python3.7/site-packages/google/protobuf/internal/decoder.py", line 559, in _ConvertToUnicode
    value = str(byte_str, 'utf-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 27: 'utf-8' codec can't decode byte 0x80 in position 27: invalid start byte in field: apphosting_datastore_v3_bytes.CompiledQuery.PrimaryScan.index_name
byte_str == b'\n\x13PROJECTIDXXXXXXXXXX\x1a\x04Test\x80\x01\xff\xff\xff\xff\x07\xb8\x01\xff\xff\xff\xff\x07\xc8\x01\x01'

Steps to Reproduce the Problem

from google.appengine.ext.ndb import Model

class Test(Model):
  pass

Test.query().fetch(produce_cursors=True)

Specifications

  • Version: 0.3.1
  • Platform: macOS, Python 3.7, ARM64

This used to produce just some log warnings on non-ARM platforms; not sure yet whether it's due to ARM or just some dependency being newer on the new system (all Python deps are pinned and identical; maybe they still depend on a system-wide protobuf, and that one got bumped?)

golang/appengine#136 -- maybe related?

`cgi` module is deprecated

See PEP 594. The cgi module is deprecated since Python 3.11 and will be removed in Python 3.13.

Currently, a DeprecationWarning is emitted due to appengine-python-standard's use of the cgi module.

.venv/lib/python3.12/site-packages/google/appengine/runtime/request_environment.py:29: DeprecationWarning: 'cgi' is deprecated and slated for removal in Python 3.13
    import cgi
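PEP 594 points to the email.message API as the replacement for cgi's header parsing. A drop-in sketch for the common cgi.parse_header() use case (assuming that is what request_environment.py relies on):

```python
from email.message import EmailMessage

def parse_header(line):
    # Replacement for the deprecated cgi.parse_header(), built on the
    # modern email API as suggested by the PEP 594 migration notes.
    msg = EmailMessage()
    msg['Content-Type'] = line
    return msg.get_content_type(), dict(msg['Content-Type'].params)

ctype, params = parse_header('text/html; charset=utf-8')
```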

Specifications

  • Version: appengine-python-standard v1.1.6
  • Platform: Linux 6.6.30-2 x86_64; Python 3.12.2


Mock is a test dependency and should not be in install_requires

"mock>=4.0.3",

https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies

This causes mock to be included as a production dependency in the requirements.txt when using pip-compile:

mock==4.0.3 \
    --hash=sha256:122fcb64ee37cfad5b3f48d7a7d51875d7031aaf3d8be7c42e2bee25044eee62 \
    --hash=sha256:7d3fbbde18228f4ff2f1f119a45cdffa458b4c0dee32eb4d2bb2f82554bac7bc
    # via appengine-python-standard

Set built-in libraries.

Hi,
is it possible to set built-in libraries like setuptools on the App Engine standard environment?

`google.appengine.runtime.initialize.InitializeThreadingApis()` breaks debugger (debugpy/pydevd)

Expected Behavior

The app can be debugged using debugpy (pydevd)

Actual Behavior

The debugger crashes: AttributeError: 'NoneType' object has no attribute 'additional_info'

Traceback (most recent call last):
  File "_pydevd_bundle/pydevd_cython.pyx", line 133, in _pydevd_bundle.pydevd_cython.set_additional_thread_info
AttributeError: 'NoneType' object has no attribute 'additional_info'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "_pydevd_bundle/pydevd_cython.pyx", line 1078, in _pydevd_bundle.pydevd_cython.PyDBFrame.trace_dispatch
  File "_pydevd_bundle/pydevd_cython.pyx", line 297, in _pydevd_bundle.pydevd_cython.PyDBFrame.do_wait_suspend
  File "/tmp/tmpNcm75j/lib/python3.9/site-packages/debugpy/_vendored/pydevd/pydevd.py", line 1975, in do_wait_suspend
    with self._threads_suspended_single_notification.notify_thread_suspended(thread_id, stop_reason):
  File "/usr/local/lib/python3.9/contextlib.py", line 117, in __enter__
    return next(self.gen)
  File "/tmp/tmpNcm75j/lib/python3.9/site-packages/debugpy/_vendored/pydevd/pydevd.py", line 436, in notify_thread_suspended
    with AbstractSingleNotificationBehavior.notify_thread_suspended(self, thread_id, stop_reason):
  File "/usr/local/lib/python3.9/contextlib.py", line 117, in __enter__
    return next(self.gen)
  File "/tmp/tmpNcm75j/lib/python3.9/site-packages/debugpy/_vendored/pydevd/pydevd.py", line 390, in notify_thread_suspended
    self.on_thread_suspend(thread_id, stop_reason)
  File "/tmp/tmpNcm75j/lib/python3.9/site-packages/debugpy/_vendored/pydevd/pydevd.py", line 361, in on_thread_suspend
    self.send_suspend_notification(thread_id, stop_reason)
  File "/tmp/tmpNcm75j/lib/python3.9/site-packages/debugpy/_vendored/pydevd/pydevd.py", line 430, in send_suspend_notification
    py_db.writer.add_command(py_db.cmd_factory.make_thread_suspend_single_notification(py_db, thread_id, stop_reason))
  File "/tmp/tmpNcm75j/lib/python3.9/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_net_command_factory_json.py", line 317, in make_thread_suspend_single_notification
    info = set_additional_thread_info(thread)
  File "_pydevd_bundle/pydevd_cython.pyx", line 137, in _pydevd_bundle.pydevd_cython.set_additional_thread_info
  File "_pydevd_bundle/pydevd_cython.pyx", line 143, in _pydevd_bundle.pydevd_cython.set_additional_thread_info
AttributeError: 'NoneType' object has no attribute 'additional_info'

Steps to Reproduce the Problem

  1. Create the example app (as in the README)
  2. Set up the debugger via gunicorn.conf.py

gunicorn.conf.py

import os

wsgi_app = 'main:app'
workers = 1
reload = True

# prevent worker timeout when pausing at breakpoint
timeout = 0

# use the port defined by dev_appserver
bind = f':{os.getenv("PORT")}'

# use when_ready hook to set up the debugger
def when_ready(server):
    import debugpy
    debugpy.listen(('localhost', 5678))

Specifications

Using Visual Studio Code to debug the app
dev_appserver cmd: dev_appserver.py --threadsafe_override=false --max_module_instances=1 --automatic_restart=false app.yaml

  • Version: 0.2.4
  • Platform: Linux container (official image gcr.io/google.com/cloudsdktool/cloud-sdk:latest)

Polymodel instances' `class_` attribute should contain a list of `str` objects on creation

Expected Behavior

PolyModel classes have a read-only attribute called class_ that contains the class hierarchy in the form of a list. This attribute should always contain a list of either str or bytes objects, preferably the former.

Actual Behavior

On instance creation, the class_ attribute is filled with bytes objects. After the object is put, and on any subsequent read, the attribute contains str objects instead. This makes the attribute hard to rely on: to use it, you either need to know whether the entity has already been written to the database, or convert between the types. It also breaks equality checks when one instance has just been created and the other has been retrieved from the database.

The Python 2 SDK behaves similarly (the attribute is of type str pre-put and unicode after), but because Python 2's str and unicode are mostly interchangeable, it wasn't much of an issue there.

Steps to Reproduce the Problem

The following test will work fine in Python 2 but fail in Python 3.

from google.appengine.ext import testbed
from google.appengine.ext.ndb import polymodel


class APolymodel(polymodel.PolyModel):
    pass


def test_class_in_polymodel():
    bed = testbed.Testbed()
    bed.activate()
    bed.init_datastore_v3_stub()
    bed.init_memcache_stub()

    concrete = APolymodel(id=1)
    concrete.put()

    concrete_2 = APolymodel(id=1)
    assert concrete == concrete_2

    bed.deactivate()

The error

FAILED tests_py3/model_test.py::test_class_in_polymodel - AssertionError: assert APolymodel(key=Key('APolymodel', 1), class_=['APolymodel']) == APolymodel(key=Key('APolymodel', 1), class_=[b'APolymodel'])

The offending code is likely the following:

def _class_key(cls):
    """Return the class key.

    This is a list of class names, e.g. ['Animal', 'Feline', 'Cat'].
    """
    return [six.ensure_binary(c._class_name()) for c in cls._get_hierarchy()]

Unless there's a clear reason to enforce class_ being a list of bytes on instance creation, it would be great to remove the ensure_binary call, making the attribute consistently a list of str.

It goes without saying, but this issue does not affect classes that inherit directly from ndb.Model.
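In the meantime, a small normalization helper (hypothetical; `normalize_class_key` is not part of the SDK) can make comparisons reliable regardless of whether the entity has been put:

```python
def normalize_class_key(class_key):
    """Return a PolyModel class_ hierarchy as a list of str, decoding bytes entries."""
    return [c.decode("utf-8") if isinstance(c, bytes) else c for c in class_key]

# A freshly created instance may report bytes, a stored one str;
# normalizing both sides makes them comparable.
print(normalize_class_key([b"APolymodel"]) == normalize_class_key(["APolymodel"]))  # True
```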

Specifications

  • Version: 1.1.1
  • Platform: python3.9

Django: Attempted RPC call without active security ticket

I am using Django instead of Flask. I put the following code in wsgi.py:

#from .wsgi import application
from google.appengine.api import wrap_wsgi_app

app = wrap_wsgi_app(application, use_legacy_context_mode=True, use_deferred=False)

and my app.yaml:

runtime: python39
entrypoint: gunicorn -b :$PORT youtube.wsgi #name of my django project: youtube
app_engine_apis: true

I deploy it with "gcloud beta app deploy"

And if I run:

import time
from google.appengine.api import background_thread

def song(temps):
    time.sleep(temps)
    return "Done"

temps = 100
e = background_thread.start_new_background_thread(song, [temps])

But I get an error like:
Attempted RPC call without active security ticket

Request Method: | POST
https://django-deploy........com/download/new/
3.2.8
RPCFailedError
Attempted RPC call without active security ticket
/layers/google.python.pip/pip/lib/python3.9/site-packages/google/appengine/runtime/default_api_stub.py, line 182, in _MakeCallImpl
/opt/python3.9/bin/python3
3.9.13
['/workspace', '/layers/google.python.pip/pip/bin', '/opt/python3.9/lib/python39.zip', '/opt/python3.9/lib/python3.9', '/opt/python3.9/lib/python3.9/lib-dynload', '/layers/google.python.pip/pip/lib/python3.9/site-packages', '/opt/python3.9/lib/python3.9/site-packages']

Help me, please. Any suggestion is welcome.

  • Version: python3.9
  • Platform: DJANGO on appengine

Plans for future remote_api support?

Expected Behavior

from google.appengine.ext.remote_api import remote_api_stub works as it did in the Python 2 version of this SDK.

I see references to the remote_api_stub in the test bed like here:
https://github.com/GoogleCloudPlatform/appengine-python-standard/blob/cc19a2edb1907a8b91c6fb190760ade6ae249a08/src/google/appengine/ext/testbed/__init__.py

Actual Behavior

remote_api_stub is not available.

Steps to Reproduce the Problem

  1. Run from google.appengine.ext.remote_api import remote_api_stub.
    This will fail.
    I'm trying to run code like:

remote_api_stub.ConfigureRemoteApiForOAuth(
    host,
    path,
    app_id=app_id,
    secure=secure
)
remote_api_stub.ConfigureRemoteApi(
    app_id=app_id,
    path=path,
    auth_func=lambda: ('[email protected]', None),
    servername=host,
    secure=secure
)

Is there a better way to do this now with the legacy bundle? Or something else?

Specifications

  • Version: SDK 392.0.0 and appengine-python-standard 1.0.0
  • Platform: python39

ndb.key.urlsafe() should return a string not bytes

Expected Behavior

When calling ndb.key.urlsafe() you should get a string that can be concatenated with other strings.

Actual Behavior

Currently it returns bytes, so you need to call .decode() on the result of every urlsafe() call.

Steps to Reproduce the Problem

  1. On an ndb.Model-derived instance, run: x.key.urlsafe() + ''
  2. This raises a runtime error: TypeError: can only concatenate str (not "bytes") to str
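Until the return type changes, a small wrapper (hypothetical; `urlsafe_str` is not part of the SDK) keeps call sites clean. The fake key below stands in for a real ndb key so the sketch is self-contained:

```python
def urlsafe_str(key):
    """Return key.urlsafe() as str, whether the SDK hands back bytes or str."""
    raw = key.urlsafe()
    return raw.decode("utf-8") if isinstance(raw, bytes) else raw

# Stand-in for an ndb key; real code would pass x.key instead.
class FakeKey:
    def urlsafe(self):
        return b"agxkZXZ-example"

print("/item/" + urlsafe_str(FakeKey()))  # /item/agxkZXZ-example
```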

Specifications

  • Version: 1.0.0
  • Platform: Ubuntu 20.04 (wsl2)

os.environ not passed through Flask test_client() after the Flask app has been wrapped by wrap_wsgi_app()

Expected Behavior

ID1 before test_client(): 123
ID2 before test_client(): 456
APPLICATION_ID after test_client(): testbed-test
ID1 after test_client(): 123
ID2 after test_client(): 456
APPLICATION_ID after test_client(): testbed-test

Actual Behavior

ID1 before test_client(): 123
ID2 before test_client(): 456
APPLICATION_ID after test_client(): testbed-test
ID1 after test_client(): 123
ID2 after test_client():
APPLICATION_ID after test_client():

Steps to Reproduce the Problem

  1. Run the following code with Python 3
from flask import Flask
from google.appengine.api import wrap_wsgi_app
from google.appengine.ext import testbed
import os


os.environ['ID1'] = "123"
app = Flask(__name__)
app.wsgi_app = wrap_wsgi_app(app.wsgi_app)
os.environ['ID2'] = "456"


@app.route('/')
def hello():
    print("ID1 after test_client(): %s" % os.environ.get("ID1", ""))
    print("ID2 after test_client(): %s" % os.environ.get("ID2", ""))
    print("APPLICATION_ID after test_client(): %s" % os.environ.get("APPLICATION_ID", ""))
    return "OK"


t = testbed.Testbed()
t.activate()
t.setup_env()

print("ID1 before test_client(): %s" % os.environ.get("ID1", ""))
print("ID2 before test_client(): %s" % os.environ.get("ID2", ""))
print("APPLICATION_ID after test_client(): %s" % os.environ.get("APPLICATION_ID", ""))

app.test_client().get('/')
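One plausible explanation, sketched below (illustrative only, not the SDK's actual implementation), is that wrapping snapshots os.environ and restores that snapshot on each request. That would reproduce exactly the observed output, since ID2 and APPLICATION_ID are both set after wrap_wsgi_app() runs:

```python
import os

os.environ["ID1"] = "123"  # set before wrapping, like ID1 in the repro

def make_env_snapshot_middleware(app):
    snapshot = dict(os.environ)  # captured at wrap time
    def middleware(environ, start_response):
        # Restore the wrap-time snapshot for the duration of the request.
        os.environ.clear()
        os.environ.update(snapshot)
        return app(environ, start_response)
    return middleware

def inner_app(environ, start_response):
    return [os.environ.get("ID1", ""), os.environ.get("ID2", "")]

wrapped = make_env_snapshot_middleware(inner_app)
os.environ["ID2"] = "456"  # set after wrapping, like ID2 in the repro

print(wrapped({}, None))  # ['123', ''] -- ID2 is lost inside the request
```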

Specifications

  • Version: appengine-python-standard 1.0.0
  • Platform: Python 3.9

BlobstoreUploadHandler doesn't handle empty form fields correctly

Expected Behavior

When some form fields are part of the upload form, their values should be passed correctly to the upload handler. For example with the code below, a text field with the value "abc" gives this output:

form fields: ImmutableMultiDict([('text', 'abc'), ('submit', 'Submit Now')])

If the field is left empty, it should give this output:

form fields: ImmutableMultiDict([('text', ''), ('submit', 'Submit Now')])

Actual Behavior

If the field is filled in, the output is as expected. However, if the field is left empty, this is the output:

form fields: ImmutableMultiDict([('text', '--0000000000002f7a3d05f82538d2\r\nContent-Type: text/plain; charset="UTF-8"\r\nContent-Disposition: form-data; name=submit\r\n\r\nSubmit Now')])

Steps to Reproduce the Problem

This is a simplified version of the example shown in the documentation:

from flask import Flask, redirect, request
from google.appengine.api import wrap_wsgi_app
from google.appengine.ext import blobstore

app = Flask(__name__)
app.wsgi_app = wrap_wsgi_app(app.wsgi_app, use_deferred=True)

class PhotoUploadHandler(blobstore.BlobstoreUploadHandler):
    def post(self):
        print('form fields:', request.form)
        print('files:', request.files['file'])
        return ''

@app.route("/upload_photo", methods=["POST"])
def upload_photo():
    """Upload handler called by blobstore when a blob is uploaded in the test."""
    return PhotoUploadHandler().post()

@app.route("/test")
def upload():
    """Create the HTML form to upload a file."""
    upload_url = blobstore.create_upload_url("/upload_photo")
    response = """
  <html><body>
  <form action="{0}" method="POST" enctype="multipart/form-data">
    Upload File: <input type="file" name="file"><br>
    Input field: <input type="text" name="text"><br>
    <input type="submit" name="submit" value="Submit Now">
  </form>
  </body></html>""".format(
        upload_url
    )

    return response

Test with a filled-in value for the text field, then with an empty value.

Specifications

  • Version: 'CNB_STACK_ID': 'google.gae.22'
  • Platform: 'GAE_RUNTIME': 'python311'

Note that I cannot reproduce this in the dev environment: the behavior is as expected. This code only fails on a live site (version as above).

AttributeError: 'DefaultApiStub' object has no attribute 'CancelApiCalls'

Expected Behavior

Upgrading from Python 2 to Python 3, I expected the following piece of code to continue working with this library.

try:
    from google.appengine.api import runtime, apiproxy_stub_map

    def hook():
        logging.info("Instance is shutting down; cleaning up.")

        apiproxy_stub_map.apiproxy.CancelApiCalls()

    runtime.set_shutdown_hook(hook)
except ImportError:
    pass

Is this deprecated? If yes, should it be replaced with something else, or is this functionality no longer needed? I can see the method in the library source code, but App Engine doesn't seem to recognize it.

Actual Behavior

Traceback (most recent call last):
  File "/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/runtime/middlewares.py", line 140, in ErrorLoggingMiddleware
    return app(wsgi_env, start_response)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/runtime/middlewares.py", line 82, in <lambda>
    lambda app: lambda wsgi_env, start_resp: f(app, wsgi_env, start_resp),
                                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/runtime/middlewares.py", line 375, in BackgroundAndShutdownMiddleware
    runtime.__BeginShutdown()
  File "/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/api/runtime/runtime.py", line 122, in __BeginShutdown
    shutdown_hook()
  File "/workspace/components/campaigns/push/processor/apns/send.py", line 40, in hook
    apiproxy_stub_map.apiproxy.CancelApiCalls()
  File "/layers/google.python.pip/pip/lib/python3.12/site-packages/google/appengine/api/apiproxy_stub_map.py", line 375, in CancelApiCalls
    self.__default_stub.CancelApiCalls()
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'DefaultApiStub' object has no attribute 'CancelApiCalls'

Steps to Reproduce the Problem

  1. Install the shutdown hook
try:
    from google.appengine.api import runtime, apiproxy_stub_map

    def hook():
        logging.info("Instance is shutting down; cleaning up.")

        apiproxy_stub_map.apiproxy.CancelApiCalls()

    runtime.set_shutdown_hook(hook)
except ImportError:
    pass
  2. Use the App Engine service and wait for the instance to shut down.
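Until DefaultApiStub implements the method, a defensive variant of the hook can guard the call so shutdown doesn't crash. This is a sketch; `safe_cancel_api_calls` and both stub classes below are hypothetical stand-ins, not SDK code:

```python
import logging

def safe_cancel_api_calls(apiproxy):
    """Call CancelApiCalls() only when the active API stub supports it."""
    try:
        apiproxy.CancelApiCalls()
        return True
    except AttributeError:
        logging.info("Active API stub does not implement CancelApiCalls; skipping.")
        return False

# Stand-ins for the two runtimes' stubs, for illustration only:
class LegacyStub:
    def CancelApiCalls(self):
        pass

class DefaultApiStubLike:  # like DefaultApiStub, lacks the method
    pass

print(safe_cancel_api_calls(LegacyStub()))          # True
print(safe_cancel_api_calls(DefaultApiStubLike()))  # False
```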

Specifications

  • Version: 1.1.6
  • Platform: Google Appengine
  • Python 3.12

Dependencies used but not declared: abseil, attrs

Expected Behavior

$ pip install appengine-python-standard
$ python
>>> from google.appengine.ext import testbed

Actual Behavior

>>> from google.appengine.ext import testbed
...
ModuleNotFoundError: No module named 'absl'

Missing dependencies:

  • absl-py
  • attrs
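Until the package declares them itself, a workaround is to list the missing dependencies explicitly in your own requirements.txt:

```
appengine-python-standard
absl-py
attrs
```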

`deferred` module doesn't work

Expected Behavior

A new task is enqueued and executed via google.appengine.ext.deferred

Actual Behavior

An error occurs preventing the task from being enqueued
Logs:

INFO     2021-10-11 17:23:14,208 module.py:883] default: "GET /test HTTP/1.1" 200 17
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib/google-cloud-sdk/platform/google_appengine/google/appengine/api/taskqueue/taskqueue_stub.py", line 2192, in MainLoop
    self._ProcessQueues()
  File "/usr/lib/google-cloud-sdk/platform/google_appengine/google/appengine/api/taskqueue/taskqueue_stub.py", line 2137, in _ProcessQueues
    response_code = self.task_executor.ExecuteTask(task, queue)
  File "/usr/lib/google-cloud-sdk/platform/google_appengine/google/appengine/api/taskqueue/taskqueue_stub.py", line 2069, in ExecuteTask
    '0.1.0.2')
  File "/usr/lib/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 802, in add_request
    fake_login=fake_login)
  File "/usr/lib/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 1249, in build_request_environ
    'wsgi.input': six.StringIO(six.ensure_text(body))
  File "/usr/lib/google-cloud-sdk/platform/google_appengine/lib/six-1.12.0/six/__init__.py", line 904, in ensure_text
    return s.decode(encoding, errors)
  File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0: invalid start byte

Steps to Reproduce the Problem

I am running the code in a container created from the official image gcr.io/google.com/cloudsdktool/cloud-sdk:latest

  1. Using the example from the README, set use_deferred=True:
    main.py:
import google.appengine.api
from google.appengine.ext import deferred

from flask import Flask

def dummy_deferred(msg='dummy'):
    print(msg)

app = Flask(__name__)
app.wsgi_app = google.appengine.api.wrap_wsgi_app(app.wsgi_app, use_deferred=True)

@app.route('/test')
def api():
    deferred.defer(dummy_deferred)
    return {'m': 'it works'}

app.yaml

runtime: python37
app_engine_apis: true
entrypoint: gunicorn main:app
  2. Run the app: dev_appserver.py --require_indexes=yes --support_datastore_emulator=yes --clear_datastore=yes app.yaml
  3. Open the browser and go to http://127.0.0.1:8080/test

Specifications

  • Version: 0.2.2
  • Platform: Linux (Official Docker Image)
