DESIGN PATTERNS in CLOUD COMPUTING #1

Waruni Lalendra
12 min read · Sep 1, 2023


Importance of Design Patterns in Cloud Computing — Part 01

When it comes to coding, we all know what design patterns are and what they do. Most of us think of the ones we use in code, such as the Singleton or Adapter patterns, which solve recurring coding problems in an effective way. Likewise, there are effective, well-established solutions for recurring problems in cloud computing. These are known as ‘cloud design patterns’.

In coding, design patterns help address major problems such as code duplication, lack of structure, and poor scalability. Cloud design patterns, in turn, address the recurring issues of cloud application development.

Before we understand each issue in detail, let’s recap what cloud computing means.

“A style of computing where scalable and elastic IT-enabled capabilities are provided as a service to external customers using Internet technologies”.

— Gartner

In basic terms, cloud computing is like a team of computers that work together over the internet. Instead of having just one computer, you can use many computers that share their power to do tasks. It’s like getting help from a bunch of friends online to get things done faster and easier.

Now let’s have an overall idea of the higher-level model of cloud computing.

Figure 1 — Higher-level model of a cloud application
  1. Service Usage Metering: In cloud computing, it means keeping tabs on how much you’re using different services so you know what you’re paying for.
  2. Instrumentation and Telemetry: Just like a dashboard in a car tells you how fast you’re going and how much fuel you have, in cloud computing, this is about having tools that show you how your cloud services are working.
  3. DevOps: DevOps is about making sure everyone (developers and operations) works together to build and run apps.
  4. Users: These are the people using the apps you create. They interact with the apps’ interfaces and features.
  5. Compute Partitioning: In cloud computing, it’s splitting tasks into smaller bits that different computers handle.
  6. Auto-scaling: Imagine a fan that speeds up when it gets hot. In cloud computing, auto-scaling means adding more computers to help when things get busy and removing them when it’s not so busy.
  7. STS (Security Token Service) and IDP (Identity Provider): These are like special locks and keys for your apps. They help make sure only the right people can access the apps and services.
  8. Caching: Think of this as having a quick access drawer for stuff you use often. In cloud computing, caching stores important data close by, so your app doesn’t have to search far for it.
  9. Multi DC Deployment: In cloud computing, it’s spreading your app across different locations so it’s available even if one location has a problem.
  10. Data Replication and Synchronization: It’s about copying and updating information in different locations to keep it consistent.
  11. Data Partitioning: In cloud computing, it’s splitting data into groups, so different computers handle different parts.
  12. Data Consistency Primer: In cloud computing, it’s making sure all computers have the same updated data.
  13. Asynchronous Messaging Primer: It’s when computers exchange messages without waiting for an instant reply.
  14. External STS/IDP and External Services or On-Premises: These are like keys issued outside your own house: your app relies on external identity providers, third-party services, or on-premises systems for certain tasks.

So, we can see that using cloud computing is not as simple as using “on-premises” or “on-prem” computing. There are concerns (Figure 1) that we should address before we can use cloud computing effectively.

Cloud Application Development Issues

Figure 2 — Cloud Application Development Issues
  1. Availability: Cloud computing involves using remote servers, which means your app relies on networks and data centers. If those connections or data centers have problems, your app might not be available. Also, when many people use the same cloud resources, it can lead to congestion and slowdowns, affecting availability.
  2. Data Management: Cloud apps store data on various servers across different locations. When these servers need to communicate and sync data, delays or failures in this communication can lead to inconsistencies or outdated information.
  3. Design and Implementation: In cloud computing, you’re often dealing with distributed systems, meaning parts of your app might be on different servers. If the design isn’t consistent and well-structured, coordinating these parts becomes challenging, leading to deployment and maintenance issues.
  4. Messaging: In the cloud, different parts of your app might not be on the same physical server. Messaging between these parts involves sending data over networks, and delays or network issues can cause messages to arrive out of order or not at all.
  5. Management and Monitoring: Cloud apps run on servers you don’t directly control. Monitoring and managing these remote resources can be more challenging than having servers on-site, as you need to rely on tools and services provided by the cloud provider.
  6. Performance and Scalability: Cloud computing involves shared resources. When more users join, the demand for those shared resources increases. If the app isn’t designed to handle this scaling, performance can suffer due to resource limitations.
  7. Resiliency: In the cloud, your app might run on different servers at different times. If one server fails, your app should switch to another. If this switch isn’t smooth or well-designed, your app might experience downtime or inconsistent behavior.
  8. Security: Cloud computing involves sending data over the internet and storing it on remote servers. Without proper security measures, this data can be intercepted during transmission or accessed by unauthorized users on the servers.

So, to address these issues effectively, we have cloud design patterns. Let’s discuss those patterns one by one.

01. Cache-Aside Pattern

Figure 3 — Cache-Aside pattern

Imagine you have a bunch of books in a library (your data store), and you want to have a smaller shelf (cache) in your room to keep the books you’re currently reading. The Cache-Aside Pattern is like deciding which books to put on your shelf when you want to read them.

Load on Demand: In the Cache-Aside Pattern, you only put a book on your shelf when you want to read it, instead of putting all books there in advance. Similarly, data is loaded into the cache from the data store only when it’s requested.

Caching data makes it faster to access because it’s closer to your application, avoiding the need to go to the original data store. But, when using caching, you need to make sure that the data in the cache is kept up-to-date with the data in the original store.

Cloud platforms offer services like Azure Cache, AWS ElastiCache, Google App Engine memcache, or Redis Cache to help manage your cached data.

Once you create a cache with one of the above services, your application can connect to it and use it. The code snippet below sketches how to do that in Python.
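
Here is a minimal Cache-Aside sketch, assuming a Redis-compatible cache (such as AWS ElastiCache or Azure Cache) and the redis-py client. The host, TTL value, and the functions get_user_from_db and save_user_to_db are hypothetical placeholders for your real setup:

```python
# A minimal Cache-Aside sketch using the redis-py client.
# get_user_from_db / save_user_to_db are hypothetical stand-ins
# for the authoritative data store.
import json

import redis

cache = redis.Redis(host="localhost", port=6379, db=0)  # placeholder host
CACHE_TTL_SECONDS = 300  # how long an entry stays fresh before it expires


def get_user_from_db(user_id):
    # Placeholder: query the authoritative data store here.
    return {"id": user_id, "name": "example"}


def save_user_to_db(user_id, user):
    # Placeholder: persist to the authoritative data store here.
    pass


def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)             # Cache hit: skip the data store.
    user = get_user_from_db(user_id)          # Cache miss: load on demand...
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(user))  # ...and cache it.
    return user


def update_user(user_id, user):
    save_user_to_db(user_id, user)            # Write to the store first,
    cache.delete(f"user:{user_id}")           # then invalidate the stale entry.
```

Note that the write path invalidates the cached entry rather than updating it; the next read reloads fresh data from the store, which keeps the cache consistent with the original data.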

When to Use the Cache-Aside Pattern: Use it when you want to improve the speed of reading and writing frequently accessed data in your application.

Parameters to Consider:

  1. What to Cache: Decide which pieces of data you want to put in the cache. Typically, you cache data that is accessed often, so your application can grab it quickly.
  2. Lifetime of Cached Data: Determine how long the data should stay in the cache before it’s considered outdated. You wouldn’t want to show users old information.
  3. Cache Size: Figure out how much space you want to allocate for your cache. Caches can’t store everything, so you need to decide how much memory or storage to dedicate to it.
  4. Evicting Data: This is like cleaning up your cache. If your cache is full and you want to add something new, you might need to remove less-used items to make space. Decide how to choose which data to remove (“evict”); a sample cache configuration follows this list.
  5. In-Memory Caching: This means storing data in the computer’s fast memory (RAM), which is much quicker to access than regular storage such as hard drives.
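
As a concrete illustration of points 3 and 4, a Redis-based cache can be capped in size and given an eviction policy through its configuration file; the values below are illustrative, not recommendations. Point 2, the lifetime, is usually set per entry, as the setex call in the snippet above does.

```
# redis.conf (illustrative values)
maxmemory 256mb               # point 3: total memory allocated to the cache
maxmemory-policy allkeys-lru  # point 4: evict least-recently-used keys when full
```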

02. Competing Consumers Pattern

Figure 4 — Competing Consumers pattern

The Competing Consumers Pattern is like having a group of friends help you process tasks. Imagine you provide a service that gets lots of requests. Instead of tackling each request one by one, you pass them to your friends, who handle them at the same time, like opening and processing letters together.

Lots of Requests: Your app gets many requests from users, like orders or messages. Instead of dealing with them all yourself, you send them to a special place called a message queue.

Team of Helpers: Imagine each request as a letter. You have a bunch of friends (consumers) who open the letters and do the tasks inside. Each friend (consumer) can work on their own letter (request) at the same time.

Faster Work: With everyone working on tasks together, things get done much faster. This is especially useful when the number of requests changes a lot, like during busy times.

Why It’s Useful:

  • Flexible Workload: Sometimes your app has few requests; other times it’s flooded. This pattern helps your app handle these ups and downs smoothly.
  • Reliability: If a friend (consumer) gets tired or takes a break, others can continue the work. No tasks are lost, and all requests are handled.
  • Better Availability: By using this pattern, your app doesn’t slow down or crash when many requests arrive suddenly. The message queue acts like a buffer, spreading the work out.

Important Things to Think About:

  • No Fixed Order: Requests might be done in a different order than they arrive. Make sure your tasks can handle this randomness.
  • Handling Failures: If a friend (consumer) can’t finish a task, you want to make sure it doesn’t get lost. The system should know when a task can’t be done and handle it.
  • Sharing Results: If your friends (consumers) need to tell you something after they’re done, like “Task A is complete,” they should leave a note somewhere everyone can see.

When to Use It:

  • Use this when your app needs to process lots of tasks, like orders or messages, and they can be done independently.
  • Perfect when the number of tasks changes a lot. It helps manage busy and slow times smoothly.

When It Might Not Fit:

  • If your tasks are super dependent on each other and have to be done in a specific order, this pattern might not work well.

Within a single application, the Competing Consumers pattern can be emulated with threads. Threads represent different consumers that work on tasks concurrently, and tasks are distributed through a shared queue that mirrors the message queue. Each thread continuously retrieves tasks, processes them, and marks them as done; this parallel processing boosts efficiency, as the sketch below shows.
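
Here is a sketch of that thread-based emulation using only Python’s standard library; queue.Queue stands in for the message queue, and the task names are invented for illustration:

```python
# Thread-based emulation of Competing Consumers: queue.Queue stands in
# for the message queue, and each worker thread is one competing consumer.
import queue
import threading

task_queue = queue.Queue()
NUM_CONSUMERS = 3


def consumer(worker_id):
    while True:
        task = task_queue.get()        # Block until a task arrives.
        if task is None:               # Sentinel value: time to shut down.
            task_queue.task_done()
            break
        print(f"Consumer {worker_id} handling {task}")
        task_queue.task_done()         # Mark the task as completed.


workers = [threading.Thread(target=consumer, args=(i,)) for i in range(NUM_CONSUMERS)]
for w in workers:
    w.start()

# The producer posts tasks; whichever consumer is free picks each one up.
for n in range(10):
    task_queue.put(f"order-{n}")

task_queue.join()                      # Wait for every task to be processed.
for _ in workers:
    task_queue.put(None)               # One shutdown sentinel per consumer.
for w in workers:
    w.join()
```

Notice that the order in which tasks finish is not guaranteed, which is exactly the “No Fixed Order” caveat above.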

While threads offer a simplified approach, real-world scenarios might employ separate processes or machines for true scalability and isolation. In a real-world scenario, you might use a dedicated message queue service like RabbitMQ or Azure Service Bus.

03. Queue-Based Load Leveling Pattern

Figure 5 — Queue-based load leveling pattern

Before the similarity between Figure 4 and Figure 5 confuses you, let me start with the differences between these two patterns. Competing Consumers is primarily about parallelizing work among multiple consumers to improve processing efficiency and throughput; it focuses on efficient task distribution and parallel processing when there is a high volume of tasks. Queue-Based Load Leveling, by contrast, is primarily about managing unpredictable bursts of requests; it focuses on protecting services from overload during demand spikes while keeping them available and responsive.

In cloud environments, tasks often need to call various services. When those services face intense demand, performance and reliability can suffer, affecting not just components within the same solution but also third-party services that provide crucial resources. For this type of scenario, the Queue-Based Load Leveling pattern is a smart solution: it places a queue as a buffer between tasks and services, ensuring smoother and more reliable operation.

Imagine a small ice cream shop with one server. On regular days, customers arrive steadily. But on a scorching summer day, a large group rushes in at once, overwhelming the server. This unpredicted surge in requests for ice cream causes delays, and some customers leave disappointed. Just like this, in cloud computing, services can face sudden spikes in demand, making it challenging to maintain availability and responsiveness. The Queue-Based Load Leveling pattern acts as a virtual queue, helping smooth these bursts in requests, similar to customers waiting their turn for ice cream.

Now let’s have an idea about how this works.

  1. Asynchronous Operation: Tasks and services run independently and asynchronously. When a task needs a service, it doesn’t call it directly; instead, it posts a message containing the necessary data to the queue.
  2. Queue as Buffer: The queue stores these messages until the service is ready to retrieve and process them. Multiple tasks can send their requests through the same queue, even at varying rates.

This approach decouples tasks from services, allowing the service to work at its own pace, regardless of how many requests are in the queue. Even if the service isn’t available when a task posts a message, there’s no delay for the task.
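
Here is a minimal sketch of that decoupling, again with Python’s standard library standing in for a real cloud queue; the half-second sleep is an artificial stand-in for a service with limited throughput:

```python
# Queue-Based Load Leveling sketch: a burst of requests is absorbed by
# the queue while the service drains it at its own steady pace.
import queue
import threading
import time

request_queue = queue.Queue()


def service():
    while True:
        message = request_queue.get()
        time.sleep(0.5)                 # Simulate limited processing capacity.
        print(f"Service handled {message}")
        request_queue.task_done()


threading.Thread(target=service, daemon=True).start()

# A sudden burst of requests: posting is effectively instant, so the
# tasks never wait on the service -- the queue levels the load.
for n in range(20):
    request_queue.put(f"request-{n}")
print("Burst posted; tasks carry on without waiting for the service.")

request_queue.join()                    # Only so this demo waits for the drain.
```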

The Queue-Based Load Leveling pattern offers several advantages:

  1. Maximized Availability: Delays or issues in services won’t immediately affect the application since it can continue to post messages to the queue, even when the service is unavailable.
  2. Enhanced Scalability: You can easily adjust the number of queues and service instances to meet fluctuating demand, ensuring scalability when needed.
  3. Cost Control: Deploy only enough service instances to handle the average load, avoiding the need to provision for peak loads, which can save on costs.

This pattern is ideal for applications that rely on services susceptible to overloading. However, it may not be suitable for applications that require near-instantaneous responses from services.

04. Priority Queue Pattern

Figure 6 — Priority queue pattern

Imagine managing a busy email support center. You have regular and premium customers, and premium customers pay for faster service. Emails arrive in your inbox. Using the Priority Queue pattern, you label premium customer emails as “High Priority” and regular customer emails as “Low Priority.” You tackle high-priority emails first, ensuring premium customers get quicker responses while still addressing regular customer inquiries.

With that example in mind, imagine you’re running a cloud-based system and want to make sure that certain tasks get processed faster than others based on their importance. This is where the “Priority Queue” pattern comes into play.

Typically, messages in a queue are processed in the order they were received, following a “first-in, first-out” (FIFO) approach. But what if some tasks need special treatment because they are more critical or urgent? How can you ensure that high-priority tasks are handled promptly, while still processing lower-priority tasks?

The Priority Queue pattern provides a solution to this challenge. It allows you to prioritize tasks so that those with higher priority are received and processed faster than lower-priority tasks. Here’s how it works:

Priority Messaging: Some message queues support priority messaging. This means that when a message is sent to the queue, it can be assigned a priority level. The queue then automatically reorders the messages so that those with higher priorities are received and processed ahead of lower-priority ones.
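
As an illustration, Python’s standard library includes a priority queue that behaves this way: entries with smaller priority numbers are delivered first, so premium messages are tagged with the smaller value. The messages themselves are invented for the example.

```python
# Priority Queue sketch: high-priority (premium) messages are delivered
# before low-priority (regular) ones, regardless of arrival order.
import queue

HIGH, LOW = 0, 1                        # Smaller number = higher priority.
inbox = queue.PriorityQueue()

# Messages arrive interleaved, each tagged with its priority.
inbox.put((LOW, "regular customer: billing question"))
inbox.put((HIGH, "premium customer: outage report"))
inbox.put((LOW, "regular customer: feature request"))
inbox.put((HIGH, "premium customer: login failure"))

# The consumer always receives the highest-priority message first.
while not inbox.empty():
    priority, message = inbox.get()
    label = "High" if priority == HIGH else "Low"
    print(f"[{label} Priority] {message}")
```

Message brokers achieve the same effect across machines, either with native priority support or with a separate queue per priority level.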

Using the Priority Queue pattern offers several benefits:

  1. Business Requirements: It allows you to meet business requirements where tasks must be prioritized based on their importance, such as providing different service levels to specific groups of customers. For example, you can ensure premium customers get faster service.
  2. Cost Control: With the ability to allocate more resources to higher-priority tasks, you can optimize resource usage and minimize operational costs.
  3. Performance and Scalability: The pattern can maximize application performance by assigning more resources to high-priority tasks and can help handle scalability more efficiently.

I hope this article helps you to understand cloud design patterns better. Let’s meet again with part 2 of this article series. If you wish to learn more, you can visit this site.


Waruni Lalendra

Software Engineering undergraduate at University of Kelaniya Sri Lanka