DEVELOPER BLOG

【Introduction to Serverless】Launching Containers Serverless with Cloud Run - PrismScaler

1. Introduction

Hello! We are the writer team from Definer Inc. In this issue, we look at how to launch containers serverlessly with Cloud Run, walking through the actual screens and resources in detail.

2. Purpose/Use Cases

Serverless containers offer several benefits that make them a popular choice for application deployment. Here are some key advantages:
  1. Scalability: Serverless containers automatically scale based on demand. They can handle sudden spikes in traffic without requiring manual intervention or capacity planning.
  2. Resource Efficiency: Serverless containers enable efficient resource allocation. They allow for granular scaling, meaning that containers can be provisioned and deprovisioned as needed, reducing resource wastage. This flexibility ensures optimal resource utilization, resulting in improved efficiency.
  3. Simplified Deployment: With serverless containers, the infrastructure management is abstracted away, allowing developers to focus on application logic rather than infrastructure setup and maintenance. Developers can package their applications into containers and deploy them using simple commands or through container orchestration platforms like Kubernetes.
  4. Faster Time to Market: Serverless containers enable rapid development and deployment cycles. Developers can iterate on their code quickly and deploy changes without worrying about underlying infrastructure.
  5. Automatic High Availability: Serverless container platforms typically provide built-in redundancy and high availability features. Containers are automatically distributed across multiple availability zones or regions, ensuring that your application remains highly available even in the event of failures.
  6. Enhanced Security: Serverless containers offer enhanced security features compared to traditional server-based deployments. Container isolation prevents applications from interfering with each other, and security patches and updates are typically handled by the platform provider.
  7. Improved Development Experience: Serverless containers promote a smooth development experience by supporting modern development practices like continuous integration and continuous deployment (CI/CD).

This article summarizes information and practices that are helpful when you want to launch containers serverlessly with Cloud Run.

3. What is Cloud Run?

Cloud Run is a fully managed serverless compute platform provided by Google Cloud Platform (GCP). It allows you to run stateless containers in a serverless environment, abstracting away the underlying infrastructure and auto-scaling your applications based on incoming requests. Cloud Run provides an easy and efficient way to deploy and manage your containerized applications without the need to manage servers or worry about infrastructure scalability.
 
Here are some key features and benefits of Cloud Run on GCP:
  1. Serverless Architecture: Cloud Run abstracts away server management and scaling. You only need to focus on deploying your containerized applications, and Cloud Run takes care of the rest. It automatically scales your applications in response to incoming traffic, ensuring high availability and optimal resource utilization.
  2. Container Compatibility: Cloud Run supports containers built using Docker and adhering to the Open Container Initiative (OCI) standards. You can easily package your applications into containers, making it simple to migrate and deploy existing workloads to Cloud Run (a minimal container sketch follows this list).
  3. Pay-per-Use Billing: With Cloud Run, you pay only for the actual compute resources consumed during request processing. You are billed based on the number of requests, the duration of each request, and the allocated memory. This pay-per-use model offers cost efficiency, as you don't have to pay for idle resources.
  4. Horizontal Autoscaling: Cloud Run automatically scales your containers horizontally based on incoming request traffic. It can scale from zero to thousands of requests per second, ensuring that your application can handle high loads during peak times while minimizing costs during periods of low activity.
  5. Integration with Google Cloud Ecosystem: Cloud Run integrates seamlessly with other Google Cloud services. You can easily connect to services like Cloud Logging, Cloud Monitoring, Cloud Pub/Sub, Cloud Storage, and more. This allows you to leverage the broader GCP ecosystem for building and managing your applications.
  6. Quick Deployment and Rapid Iteration: Deploying applications on Cloud Run is straightforward. You can use the command-line interface (CLI), web console, or continuous integration/continuous deployment (CI/CD) pipelines to deploy your containers quickly. This enables rapid iteration and faster time to market for your applications.
  7. Multi-Region Deployment: Cloud Run allows you to deploy your applications in multiple regions around the world, providing low-latency access to your users. You can leverage the global infrastructure of Google Cloud to ensure that your application is closer to your customers, improving performance and user experience.
  8. Security and Identity Integration: Cloud Run integrates with Google Cloud security features, such as Identity and Access Management (IAM) and Cloud Identity-Aware Proxy (IAP).
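
As a concrete illustration of the container contract behind points 1 and 2 above, the only hard requirement Cloud Run places on an image is that it listens for HTTP on the port injected through the PORT environment variable (8080 by default). The following is a minimal sketch using only Python's standard library; the handler name and response text are placeholders, not part of any Cloud Run API.

    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HelloHandler(BaseHTTPRequestHandler):
        """Minimal request handler; replace with your application logic."""

        def do_GET(self):
            body = b"Hello from a Cloud Run container!\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Cloud Run tells the container which port to listen on via the PORT env var.
        port = int(os.environ.get("PORT", "8080"))
        # Bind to 0.0.0.0 so the container accepts traffic from outside localhost.
        HTTPServer(("0.0.0.0", port), HelloHandler).serve_forever()

Packaged into an OCI image, a service like this is all Cloud Run needs in order to scale instances up and down with request traffic.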
 
Since applications are deployed as container images, Cloud Run is highly flexible and fits many use cases:
  • Web Applications and APIs
  • Serverless Microservices
  • Batch Processing and Data Pipelines
  • Webhooks and Event-driven Workloads
  • Micro Frontends
  • Prototyping and Development Environments
  • Content Management Systems (CMS)
 
Now, let's launch the container.

4. Serverless container launch with Cloud Run

Let's try Cloud Run right away.  

(1) Start Cloud Run

Access the Google Cloud console and click "Cloud Run" > "Create Service".
 
For "Container Image URL", specify an image hosted in a container registry such as Google Container Registry or Artifact Registry; this is the reference from which Cloud Run pulls your container image.
 
This time, we specified "us-docker.pkg.dev/cloudrun/container/hello", which is a sample container.
 
Enter the service name and region.
 
In this case, we set the "maximum number of instances" for autoscaling to 1. This parameter caps how many container instances Cloud Run will scale out to in response to traffic, which helps control resource usage and cost during spikes.
 
For authentication, "Allow unauthenticated calls" is selected, which means that anyone can make requests to your Cloud Run service without any authentication or authorization.
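
The console steps above can also be reproduced from code. The sketch below is a minimal example assuming the google-cloud-run Python client library (the article itself only uses the web console); the project ID, region, and service name are placeholders. It creates the same service from the sample image with a maximum of 1 instance. Note that "Allow unauthenticated calls" corresponds to a separate IAM step (granting roles/run.invoker to allUsers) not shown here.

    # pip install google-cloud-run   (assumed client library for the Cloud Run Admin API v2)
    from google.cloud import run_v2

    PROJECT_ID = "your-project-id"   # placeholder
    REGION = "us-central1"           # placeholder
    SERVICE_ID = "hello"             # placeholder service name

    def create_hello_service() -> run_v2.Service:
        client = run_v2.ServicesClient()
        service = run_v2.Service(
            template=run_v2.RevisionTemplate(
                containers=[
                    # The public sample container used in this article.
                    run_v2.Container(image="us-docker.pkg.dev/cloudrun/container/hello"),
                ],
                # Equivalent to setting "maximum number of instances" to 1 in the console.
                scaling=run_v2.RevisionScaling(max_instance_count=1),
            ),
        )
        operation = client.create_service(
            request=run_v2.CreateServiceRequest(
                parent=f"projects/{PROJECT_ID}/locations/{REGION}",
                service=service,
                service_id=SERVICE_ID,
            )
        )
        # create_service returns a long-running operation; wait until the revision is ready.
        return operation.result()

    if __name__ == "__main__":
        deployed = create_hello_service()
        print("Service URL:", deployed.uri)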
 
 

(2) Confirmation of Container Startup

Go to the Cloud Run details screen and click on the URL to see that the container is up and running!
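
If you would rather confirm startup from a script than from the browser, a simple GET against the service URL should return HTTP 200 with the sample page. The URL below is a placeholder; use the one shown on your own service details screen.

    import urllib.request

    # Placeholder: replace with the URL shown on your Cloud Run service details screen.
    SERVICE_URL = "https://hello-xxxxxxxxxx-uc.a.run.app"

    with urllib.request.urlopen(SERVICE_URL) as response:
        # The sample "hello" container serves a small landing page with status 200.
        print(response.status)     # expect 200
        print(response.read(200))  # first bytes of the response body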
 

5. Cited/Referenced Articles

6. About the proprietary solution "PrismScaler"

・PrismScaler is a web service that enables the construction of multi-cloud infrastructure on AWS, Azure, and GCP in just three steps, without requiring development and operations work.
・The solution covers a wide range of scenarios such as cloud infrastructure construction/cloud migration, cloud maintenance and operation, and cost optimization, and can easily realize hundreds of high-quality, general-purpose cloud infrastructures by appropriately combining IaaS and PaaS.

7. Contact us

This article provides useful introductory information free of charge. For consultation and inquiries, please contact "Definer Inc".

8. Regarding Definer

・Definer Inc. provides one-stop IT solutions from upstream to downstream.
・We provide integrated support for advanced IT technologies such as AI and cloud IT infrastructure, from consulting to requirement definition, design, development, implementation, maintenance, and operation.
・PrismScaler delivers high-quality, rapid "auto-configuration," "auto-monitoring," "problem detection," and "configuration visualization" for multi-cloud/IT infrastructure such as AWS, Azure, and GCP.