Worker vs Responder in Cloud

Tagged as: cloud

We use clusters on the cloud. In the simplest terms, a cluster is a group of servers fronted by a mechanism that distributes and balances load. This gives the cloud its main power: scaling based on need. Servers are added when load grows and removed when they are no longer needed.

Worker vs Responder

There is a pattern in this. At a very high level you can classify these clusters into two types: Responder and Worker.

  1. Responding cluster (typical application servers) — waits for requests from clients and responds with the needed results. Many web frameworks (Spring, Flask, Rails, etc.) play this role. A typical example is loading a web page or calling a web API. If you are using Amazon Web Services (AWS), Elastic Beanstalk (EC2 plus ELB) is a good place to build this.

  2. Worker cluster — runs a potentially long-running asynchronous task and records the result in some persistent store (most commonly a database). One example is a video transcoding job. We can't handle this in the responding cluster because the task can take a long time, sometimes hours. The critical component here is a Queue, into which tasks are inserted (perhaps from a web request handled by the responding cluster). A worker picks a job from the queue, executes it, records the result, say in a database, and closes the job. Since multiple servers may be working on jobs from the same queue, it must be a distributed queue. The queue should guarantee that no job is processed by more than one server, and it should handle the case where a server goes down (dies) halfway through a job. In AWS, you can use OpsWorks to build such a cluster, with AWS SQS as the queue.
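A responding cluster (item 1) can be sketched with Flask, one of the frameworks named above. This is a minimal illustration, not a production setup; the `/status` route and its payload are my own illustrative choices — each server in the cluster would run an app like this behind the load balancer:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/status")
def status():
    # A responder waits for a client request and replies immediately
    # with the needed result; it never holds a request for hours.
    return jsonify(ok=True)

if __name__ == "__main__":
    # Behind Elastic Beanstalk, a WSGI server (e.g. gunicorn) would
    # serve this app on every instance in the cluster.
    app.run()
```

Because the work per request is short, the load balancer can spread requests across however many instances the current load demands.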
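The worker loop (item 2) can be sketched as follows. This uses Python's in-process `queue.Queue` as a stand-in for a distributed queue like SQS, and a plain dict as the persistent store; the job IDs, payloads, and function names are illustrative, not part of any real API:

```python
import queue

# Stand-in for a distributed queue such as AWS SQS. A real distributed
# queue additionally guarantees that a job is delivered to only one
# worker at a time, and re-delivers a job if the worker holding it dies
# halfway through (in SQS, via the visibility timeout).
jobs = queue.Queue()

results = {}  # stand-in for the persistent store (e.g. a database)

def submit_job(job_id, payload):
    # Called from the responding cluster, e.g. inside a web request
    # handler, so the request can return immediately.
    jobs.put((job_id, payload))

def worker_step():
    # One iteration of the worker loop: pick a job, execute it,
    # record the result, then close the job.
    job_id, payload = jobs.get()
    results[job_id] = payload.upper()  # placeholder for real work, e.g. transcoding
    jobs.task_done()                   # analogous to deleting the SQS message

submit_job("job-1", "encode this video")
worker_step()
print(results["job-1"])  # → ENCODE THIS VIDEO
```

Each server in the worker cluster runs `worker_step` in a loop; because the queue hands each job to exactly one worker, adding servers scales throughput without duplicating work.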

I feel most cloud applications fit roughly into the above two categories. If readers know of other patterns, I would be glad to include them.
