We're a SaaS platform running on App Engine and were part of the early-access program. Porting our Python 3 based platform from App Engine Flex to AE Standard took just a few hours - the only changes needed were tweaks to the YAML config files and a rewrite of our deployment scripts after creating a new environment and project.
We still maintain a docker-based environment on AWS EBS which we were able to port over as well.
Overall a fantastic experience. The App Engine Standard team provided stellar support too.
We did notice that pure CPU performance on AE Standard is lower than on dedicated AE Flex instances, but on a price/performance basis AE Standard is 10X better for infrequently accessed micro-services. That will let us migrate some of these to Standard for great cost savings - as well as ramp up auto-scaling to handle spikes automagically.
With Python 2 officially deprecated in 2020, App Engine Standard just got a huge boost in long-term relevance. Great for developers who want to deploy without having to deal with infrastructure.
Good to hear that AE Standard - i.e. this vision of zero-effort Python deployment - is still relevant after all the Kubernetes-related announcements at Google Next.
Just curious, why did you migrate from Flex to Standard? Only for the 10X price/performance boost for infrequently accessed services?
PS: Regarding automagically handling spikes I have a warning that might be useful for you.
The way the AE load balancer works is:
0. A request comes in.
1. AE checks for an available instance. If none is available:
2. It spins up a new instance.
3. !!! It assigns the request to that instance.
4. It waits for /_ah/warmup to finish.
5. It adds that instance to its serving pool so other requests may be routed there as well.
Note that step 4 comes after step 3, so at least one request will wait on /_ah/warmup. So occasionally you will see requests with high latency (if your warmup is slow, of course). Our latency requirements were quite strict, so we ended up switching to manual scaling and heavily over-provisioning to handle spikes. That cost us good money, though.
> Only for the 10X price/performance boost for infrequently accessed services?
Yes - this will allow us to offload async tasks and other non-critical work to Standard. It will also let us keep lots of staging environments available for testing/unit testing, and deploy dedicated instances that are fully contained, without 24/7 costs for each instance.
Re: latency - absolutely, we've seen it take 10-15 seconds to serve an initial request. Things are snappy afterwards.
> At this time, the original App Engine-only APIs are not available in Second Generation runtimes, including Python 3.7.
This means that if you already have a significant app running on App Engine Standard, none of the built-in APIs are supported (memcache, images, search, task queues, email, etc.). That's a significant blocker for anyone wanting to upgrade an existing Standard 2.7 app to 3.7.
It seems to me this product is mainly targeted at 1) new customers, and 2) AE Flex customers wanting to switch to Standard.
The third category (existing 2.7 Standard customers wanting to just upgrade to 3.7 - which is me, unfortunately) is going to be disappointed and stuck in a hard place, I think.
Pretty sure they're being sarcastic. But I'm guessing the commenter hasn't used Google Cloud anyway - Python 3 has been supported on Flex environments for a long time. The news here is that it's now supported on Standard environments, and there seem to be good technical reasons for it taking so long.
Is gRPC supported? My gut feeling is no, but I hope I'm wrong.
Edit: Holy shit, this is so bittersweet. I was relying heavily on the batteries-included user management, logging, admin-only handlers, and especially the super easy cron job definitions.
IDK what to think about this. It kinda takes away all the things that made App Engine worth it for me as a programmer who doesn't want to do dev ops.
AFAIK you can't run a gRPC server on App Engine, but you can access other gRPC services (like all the GCP APIs) as a client. I might be wrong about the server part.
All those things you mentioned in the edit should be supported in the new runtime.
Edit: I see what you mean. The deeply integrated services are removed in favor of more portable solutions, like Logging moving to Stackdriver Logging (though stdout is still logged automatically), Users moving to Firebase Auth, etc. Cron should still work fine though.
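For what it's worth, cron on the new runtime is still defined in a cron.yaml file alongside your app; the job below is just a placeholder to show the shape (your app has to serve the handler at that URL itself):

```yaml
# cron.yaml -- example job definition (URL and schedule are placeholders)
cron:
- description: "nightly cleanup (example)"
  url: /tasks/cleanup
  schedule: every 24 hours
```

Deployed with `gcloud app deploy cron.yaml`, same as before.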
Thanks for the reply. Yeah, I'm bummed about that, because I'd rather use gRPC client libraries for my apps than the ones generated from Endpoints discovery docs.
That page mentions that it supports the google-cloud-python libraries, which are the primary GCP gRPC client libraries for Python. The Apiary service-discovery library you are referring to is called google-api. The new GAE version doesn't support the old GAE-only libraries, like memcache, datastore, etc. These older services only worked with GAE, and therefore weren't portable to other services, which they are in the process of replacing. If I recall, the old version didn't support the google-cloud-python libraries either, so it's good to see that they are moving in that direction. I could be wrong about that though.
In short, it looks like you have access to the main GCP gRPC libraries. You don't have access to the GAE-only service libraries.
The Google Cloud client libraries [1] are fully supported. You can use these to access services like Datastore, Spanner, Natural Language, and many other Cloud services. They work on this new runtime, on the flexible environment, on a VM, your local machine, etc. In my opinion, this is the most Pythonic way to integrate Cloud services in your app while maintaining portability.
[1] https://github.com/GoogleCloudPlatform/google-cloud-python -- small caveat, the repo currently states that they're not supported on App Engine standard environment. That will be updated soon to clarify that the Python 3.7 runtime on the standard environment is supported.
We're working on suitable replacements for these services. In some cases, e.g., Cloud Tasks, we're pretty close. Others will take a little longer. We'll share updates as we make more of these services available.
Also, I did find that someone wrote an ndb-like wrapper around the Cloud Datastore API.
https://github.com/Bogdanp/anom-py
I haven't tried it out yet.
Although I think any reasonable path to upgrading to 3.7 might need at least some wrapper like that around the Datastore (even if it's not 100% compatible); otherwise people relying on it are forced to rewrite their entire data layer. At this point I might just start rewriting the app to move to something with less lock-in, like MongoDB. But I'd love to hear what other people's plans are for this.
"The ndb ORM library is not available for Python 3. You can access Cloud Datastore through the Cloud Datastore API. You can use the Google Cloud client libraries to store and retrieve data from Cloud Datastore."
What is access to your hosted Postgres like? Do you still have to use that proxy thing? I'm wondering how much effort is required to move a SQLAlchemy-based app over to this.
If you're an existing App Engine Standard user/customer, it's pretty likely you won't be able to take advantage of PostgreSQL, as the new runtime does NOT give you access to a ton of the existing services you're probably already using (such as Datastore).
Hi optimusclimb -- just want to make clear that using Cloud SQL to connect to PostgreSQL should work using Python 3.7 on the App Engine standard environment. Your earlier comment accurately pointed out a gap in our docs. We're going to address that -- thank you.
If you try Cloud SQL and find that it doesn't work with PostgreSQL on this new runtime, that's a bug and we need to fix it.
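To answer the proxy question concretely: on App Engine Standard you don't run a proxy process yourself - the Cloud SQL instance is exposed inside the sandbox as a unix socket under /cloudsql/. A sketch of building the SQLAlchemy connection URL with only the standard library (all names below are placeholders; in your app you'd hand the resulting string to `sqlalchemy.create_engine`):

```python
# Build a SQLAlchemy URL for Cloud SQL Postgres on App Engine standard.
# The instance is reachable over a unix socket mounted at
# /cloudsql/<PROJECT>:<REGION>:<INSTANCE>, so no proxy runs in the app.
from urllib.parse import quote_plus

def cloud_sql_url(user, password, db, connection_name):
    """Return a postgresql+psycopg2 URL using the Cloud SQL unix socket."""
    return (
        f"postgresql+psycopg2://{quote_plus(user)}:{quote_plus(password)}"
        f"@/{db}?host=/cloudsql/{connection_name}"
    )

# Placeholder credentials and instance connection name:
url = cloud_sql_url("appuser", "s3cret", "mydb",
                    "my-project:us-central1:my-db")
```

For local development you would still run the Cloud SQL Proxy to get an equivalent socket or TCP endpoint on your machine.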
I think what he's saying is: if you're currently using Python 2.7 on Standard with the existing built-in services like Task Queue, memcache, ndb, etc., then switching to the new Python 3.7 runtime (to be able to connect to Postgres) really isn't possible, since there aren't non-rewrite solutions for those missing pieces.
The (Datastore) client libraries count against the HTTP request quota, while the built-in mechanism of the Standard Environment does not. Google recommends going forward with client libraries everywhere in the documentation, but what's the solution to this newly introduced request limitation, which isn't mentioned anywhere? (Or has this finally been fixed in the meantime?)
We hit that HTTP request limit as well, and (driven by a discussion with Google support) migrated our backend away from the REST-based Google Cloud Datastore API to the internal (ApiProxy-based) App Engine API. That reduced latency as well. I can send you the internal support ticket number if you're interested.
Nice launch! I have some Python code which I've been meaning to convert to App Engine standard to take advantage of the free tier, and I hadn't yet adapted it to the proprietary APIs. Now I don't have to. Thanks!
One Q: Does Google Cloud IAP work with this as an option for user auth? Would one follow the App Engine flexible environment pattern in their docs? The Cloud IAP docs still mention the Users API as the recommended approach for the App Engine standard environment without an exception for the Python 3.7 runtime.
> This new runtime allows you to take advantage of Python's vibrant ecosystem of open-source libraries and frameworks. While the Python 2 runtime only allowed the use of specific versions of whitelisted libraries, Python 3 supports arbitrary third-party libraries, including those that rely on C code and native extensions. Just add Django 2.0, NumPy, scikit-learn or your library of choice to a requirements.txt file. App Engine will install these libraries in the cloud when you deploy your app.
Yes, this runtime supports arbitrary dependencies specified via a `requirements.txt` file. If you find a library that doesn't work, please let us know. In general, anything that works using an open source Python 3.7 distribution should be supported.
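Concretely, the deployment config can shrink to a one-line app.yaml, with dependencies pinned in an ordinary requirements.txt next to it (e.g. lines like `Django==2.0.6` or `numpy==1.15.0` - versions here are just examples):

```yaml
# app.yaml -- minimal config for the new runtime
runtime: python37
```

App Engine installs everything listed in requirements.txt at deploy time, including packages with C extensions.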
Incidentally, the best way to "let us know" about bugs in GCP (App Engine or whatever else) is via the Issue Trackers, as per https://cloud.google.com/support/docs/issue-trackers (disclaimer: making this and similar processes work is my current job at Google).
If I had to guess, I'd say the most relevant changes for App Engine use may be the optimizations documented at https://docs.python.org/3/whatsnew/3.7.html#whatsnew37-perf : the `sort` method of lists about 50% faster in common cases, the `copy` method of dicts up to 5 times faster, etc.
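If you want to sanity-check those two optimizations on your own runtime, a quick timeit sketch (the sizes and iteration counts are arbitrary; absolute numbers will vary by machine):

```python
# Micro-benchmark sketch for two of the CPython 3.7 optimizations
# mentioned above: sorting short lists and copying dicts.
import timeit

sort_t = timeit.timeit("sorted(data)",
                       setup="data = [3, 1, 2] * 4",
                       number=100_000)
copy_t = timeit.timeit("d.copy()",
                       setup="d = {i: i for i in range(100)}",
                       number=100_000)

print(f"sort: {sort_t:.3f}s  dict.copy: {copy_t:.3f}s")
```

Comparing the same script under 3.6 and 3.7 is the only meaningful way to see the delta.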
Of significant interest to people who are stuck/like writing Python would be the async changes in 3.5 and 3.6. There have also been constant type annotation enhancements made throughout the 3.x releases.
Speaking for myself: 3.7 is the newest release with all the nice shiny, new features. Knowing that I could actually use the most current tools makes AE much more attractive than it was before this announcement.
It took them so long, I think pointing out that it's 3.7 emphasizes that they're finally caught up.
In the context that it's taken them a decade to upgrade, just seeing "Python 3" makes me wonder if they're only supporting some ancient version like 3.4 or even 3.2.
Launching these new unmodified Second Generation runtimes required us to develop new security and isolation technology (based on gVisor [1]). This allows us to securely run arbitrary code on shared data centers with isolation guarantees. This took us significantly longer than expected. The good news is, now that we have this new stack in place, we should be able to deliver runtime updates significantly faster.
That said, they don't quite go into the details of what type of isolation is missing from standard containers - I'm curious. It does seem like it would have been ideal for everyone if LXC would have had better isolation, rather than having to run a userspace kernel emulator thingy for each container, but c'est la vie!
I work on gVisor. The answer is that having a separate kernel is required to achieve a high degree of isolation and by definition Linux containers share a kernel with the host. A separate Linux kernel could work as well, but gVisor tries to achieve a different set of trade-offs.
Very few people were demanding Python 3 support 10 years ago. Prior to 3.4 uptake was limited so we’re really talking about 4 years.
Also, old App Engine was built with NaCl (remember that?) sandboxing. Anything that couldn't be built under the NaCl sandbox couldn't run in App Engine. Google realized this was a problem long ago, but it takes time to rebuild your whole platform to eliminate such a fundamental dependency. That may have taken most of their focus, leaving little for other projects. And of course their new architecture makes Python 3 free, so it's hard to justify a parallel engineering effort that would be rapidly deprecated.
Thanks! Unfortunately, there's a link to Cloud Tasks API on that page that 404s and there's not much else info on it. As far as I can tell, it's alpha. Maybe I can find the Google Next presentation on it.
Do you have suggestions on whether to deploy simple (functional - no state or external resources used, except perhaps usage tracking) APIs as cloud functions or on the App Engine? What would be the "turning point" when it's best to start considering App Engine?