I'm trying to wrap my head around Docker to architect a simple swarm that will eventually be deployed to the AWS EC2 Container Service.

My task is to take different kinds of jobs from an SQS queue and process them based on some JSON of the form { "type": "<TYPE_NAME>" }.
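To make the routing concrete, here is a minimal sketch of the dispatch step, assuming message bodies shaped like the JSON above. The handler names and payload fields are hypothetical stand-ins for the real per-type workers:

```python
import json

# Hypothetical handlers -- stand-ins for the real per-type worker containers.
def handle_python_job(payload):
    return f"python job: {payload.get('input')}"

def handle_file_job(payload):
    return f"file job: {payload.get('input')}"

# Map each job type to the code (or container) that should process it.
HANDLERS = {
    "python_script": handle_python_job,
    "file_op": handle_file_job,
}

def dispatch(message_body: str):
    """Parse an SQS message body and route it by its 'type' field."""
    payload = json.loads(message_body)
    handler = HANDLERS.get(payload["type"])
    if handler is None:
        raise ValueError(f"unknown job type: {payload['type']}")
    return handler(payload)
```

In a real deployment the values in HANDLERS would not be local functions but calls out to the worker containers.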

My initial thoughts on this are as follows:

Each type of job will get its own container

This is useful because some of my jobs are python scripts, others require C++-compiled programs, still others require specialized environments. The rest are just file operations.

One container to rule them all

One container will control the rest: it reads from the SQS queue, determines what kind of job each message is, and hands it off to the appropriate worker container.

Question time

I've successfully gotten all these individual containers basically built. Now I'm trying to figure out how to get them to talk to each other.

How should I think about passing a job off from the master container to the children? Should it be via API call? Do I need a listener service on each container attached to a port waiting for a signal or can I just execute code directly from the master instance with a shared file system?

It's all very new to me.

I do not have enough reputation to comment, so I am posting this as an answer.

I've successfully gotten all these individual containers basically built. Now I'm trying to figure out how to get them to talk to each other.

You might have learned that there are a few ways of doing this.

(a) You could use docker-compose with "links"; in this case, the linked containers can reach each other by hostname.

(b) You can use docker network create to create a network and start all the containers on that network with the --network flag.

There might be better ways as well; these are just the simplest.
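For option (a)/(b) combined, a user-defined network in a compose file gives you the same hostname-based reachability without legacy links. A minimal sketch, with hypothetical service names and directory layout:

```yaml
# Hypothetical services: on a user-defined network, each service is
# reachable from the others by its service name (e.g. http://worker-python:8080).
version: "3.8"
services:
  master:
    build: ./master
    networks: [jobs]
  worker-python:
    build: ./worker-python
    networks: [jobs]
networks:
  jobs: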

How should I think about passing a job off from the master container to the children?

I'd say the choice is yours. Using a listener on a port would be simpler than mounting a shared disk, as the latter would be difficult to scale and difficult to track. At least port forwardings can be seen with docker ps.
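A per-container listener can be very small. Here is a sketch of one using only Python's standard library, assuming the master POSTs the job JSON to the worker; the port, path, and response shape are all assumptions, not anything from the question:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class JobHandler(BaseHTTPRequestHandler):
    """Accept a job as a JSON POST body and acknowledge it."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        job = json.loads(self.rfile.read(length))
        # Real work would happen here; for the sketch, echo the type back.
        body = json.dumps({"accepted": job.get("type")}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # suppress per-request logging

def serve(port: int = 0) -> HTTPServer:
    """Bind the listener; port 0 lets the OS pick a free port."""
    return HTTPServer(("0.0.0.0", port), JobHandler)
```

The master would then address each worker by its container hostname on the shared Docker network, e.g. POSTing to http://worker-python:8080/.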

The question has a very wide scope to answer :)
