At Neocrym, we use Amazon Web Services pretty heavily to run distributed web crawling and terabyte-scale machine learning.
We build all of our AWS infrastructure using HashiCorp Terraform. We love Terraform because it is declarative. Instead of describing the steps to build our infrastructure, we describe our destination state and the Terraform runtime calculates how to get there from the infrastructure we already have in production.
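For example, a hypothetical resource block for a crawler fleet just declares that four instances should exist, and Terraform plans whatever create, update, or destroy steps are needed to make production match. (The AMI ID and tag values here are placeholders.)

```hcl
# Declare the desired end state: four crawler instances.
# Terraform diffs this against what already exists and applies
# only the changes needed to converge on it.
resource "aws_instance" "crawler" {
  count         = 4
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t3.large"

  tags = {
    Name = "crawler-${count.index}"
  }
}
```

Scaling the fleet up or down is just a matter of editing `count` and running `terraform apply` again.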
But Terraform is also very low-level, and building complicated infrastructure can quickly get very verbose.
We got tired of writing redundant Terraform, so we wrote Provose, a Terraform module that creates a high-level abstraction over AWS. You just describe to Provose the EC2 instances, containers, databases, and filesystems that you want to deploy, and Provose automatically creates the supporting resources they need, such as the networking, DNS records, TLS certificates, and IAM permissions that tie them together.
Below is an example where we run the NGINX "Hello World" container on ten AWS Fargate instances behind a load balancer at https://fargate.example.com. You can read more about this configuration in the containers section of the Provose documentation.
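Here is a sketch of what that configuration looks like. The module `ref`, project names, and resource sizings below are illustrative, and the exact field names should be checked against the Provose containers documentation.

```hcl
module "myproject" {
  # Pin Provose to a release tag (illustrative).
  source = "github.com/provose/provose?ref=v3.0.0"

  provose_config = {
    authentication = {
      aws = {
        region = "us-east-1"
      }
    }
    name                 = "myproject"
    internal_root_domain = "example-internal.com"
    internal_subdomain   = "production"
  }

  containers = {
    hellofargate = {
      # The public NGINX "Hello World" image from Docker Hub.
      image = {
        name = "nginxdemos/hello"
        tag  = "latest"
      }
      # Expose the container over HTTPS behind a load balancer.
      public = {
        https = {
          internal_http_port = 80
          public_dns_names   = ["fargate.example.com"]
        }
      }
      # Run ten copies on AWS Fargate (sizings are illustrative).
      instances = {
        instance_type   = "FARGATE"
        container_count = 10
        cpu             = 256
        memory          = 512
      }
    }
  }
}
```

From this one module block, Provose derives the cluster, the load balancer, the DNS records, and the TLS certificate; none of them need to be declared separately.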
We have found that Provose configurations are typically 10x shorter than the equivalent raw Terraform.
Provose 2.0 is the current stable version, and 3.0 is in beta. Below are some Provose 3.0 documentation links explaining how to quickly set up:
You can find tutorials and documentation at provose.com, and feel free to drop by our GitHub repository at github.com/provose/provose.