Multiple executables in single microservice
We have a microservices architecture very similar to the architecture described here.
Obviously, it's a simplified diagram of a real system. In our case, we have a requirement to perform background operations in a service in addition to exposing APIs. For example:
- In the Catalog service, if a product's stock count drops below a threshold, a notification needs to be sent to the fulfillment team
- In the Customer service, if a customer has not logged in for x days (and has opted in to promos), a promo email needs to be sent
Now my question is: should these background jobs be their own microservices (with their own executable and their own database), or should they be another executable in the same microservice? With the latter approach, for example, the Customer service would comprise CustomerAPI and CustomerPromoJob, sharing the same database. Is that an anti-pattern, given that all the applications in the Customer service would then have to be deployed at the same time?
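To make the second requirement concrete, here is a minimal sketch of the condition the CustomerPromoJob would evaluate. All names, field layouts, and the inactivity threshold are illustrative assumptions, not part of any real schema:

```python
from datetime import datetime, timedelta

PROMO_INACTIVITY_DAYS = 30  # the "x days" from the requirement; value is illustrative


def find_promo_candidates(customers):
    """Return customers who opted in to promos and have been inactive
    for at least PROMO_INACTIVITY_DAYS (hypothetical dict-based records)."""
    cutoff = datetime.utcnow() - timedelta(days=PROMO_INACTIVITY_DAYS)
    return [c for c in customers
            if c["opted_in_promos"] and c["last_login"] < cutoff]


customers = [
    {"id": 1, "opted_in_promos": True,  "last_login": datetime(2020, 1, 1)},
    {"id": 2, "opted_in_promos": False, "last_login": datetime(2020, 1, 1)},
]
print([c["id"] for c in find_promo_candidates(customers)])  # -> [1]
```

In practice this query would run against the Customer database on a schedule, which is exactly what raises the question of where the job executable should live.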
architecture domain-driven-design microservices distributed-system
edited yesterday
asked yesterday
ubi
929725
3 Answers
To be able to scale each piece of functionality independently, it is good practice to separate the background jobs from the exposed APIs. If you don't, you may, for example, end up with far more resources allocated to the background jobs than they need, simply because you had to scale the APIs.
answered yesterday
gbandres
3689
It is the ubiquitous language that makes those objects explicit in every subdomain, and it is this language that defines the bounded context/microservice boundary, and therefore its size. My point is that an object representing an actor that triggers emails to customers under a certain condition is identified within the Customer subdomain, so that is where it belongs. The same concept applies to the cron jobs of your Catalog subdomain.
Beyond that, you could have an emailing library (a shared kernel) in a commons project usable by every subdomain, but this library should know nothing about email content. Content is specific to each subdomain (the Customer microservice sends the promo email using this commons library). The same goes for jobs: you could have a cron library in your commons project used as a dependency by all your microservices, but you should define and specialize the specific business jobs in each subdomain.
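The split between a content-agnostic shared library and subdomain-owned content can be sketched as follows. All names here are illustrative, not an actual API:

```python
# commons/emailing.py -- shared-kernel style library: transport only,
# with no knowledge of what any email says.
def send_email(to, subject, body, transport=print):
    """Deliver a fully rendered message; the caller owns the content."""
    transport(f"To: {to}\nSubject: {subject}\n\n{body}")


# customer service code -- owns the promo content, reuses the commons library
def send_promo_email(customer_email, transport=print):
    subject = "We miss you!"
    body = "Here is a promo code just for you."
    send_email(customer_email, subject, body, transport=transport)
```

The commons library stays stable across subdomains, while each microservice evolves its own business-specific messages and jobs on top of it.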
answered yesterday
Sebastian Oliveri
39717
I'm assuming that the same code base/repo holds the code for both the API and the background process; that is why it is convenient to deploy both together and have both share the same database.
There should be a good reason to separate the code bases, or the databases they use. Here are some good reasons:
- you roll out changes to one much more frequently than the other
- they need separate views of the data
- they have different database-availability requirements
In my opinion, scaling is not a good reason to split the repos or the database; doing so only duplicates code and opens the door to data inconsistency and bugs. Scaling can be achieved by running more processes of one versus the other.
You haven't mentioned how you are deploying, but one example would be two Kubernetes Deployments with different replica counts, both using the same Docker image but with two different entrypoints.
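The Kubernetes approach described above can be sketched as two Deployments sharing one image but overriding the container command. This is an illustrative fragment only; the names, image reference, and entrypoint binaries are hypothetical:

```yaml
# Illustrative only: one image, two Deployments, independent scaling.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-api
spec:
  replicas: 4                      # scale the API on its own
  selector:
    matchLabels: {app: customer-api}
  template:
    metadata:
      labels: {app: customer-api}
    spec:
      containers:
        - name: api
          image: registry.example.com/customer-service:1.0
          command: ["./customer-api"]        # hypothetical entrypoint
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-promo-job
spec:
  replicas: 1                      # the background worker needs far fewer replicas
  selector:
    matchLabels: {app: customer-promo-job}
  template:
    metadata:
      labels: {app: customer-promo-job}
    spec:
      containers:
        - name: promo-job
          image: registry.example.com/customer-service:1.0
          command: ["./customer-promo-job"]  # hypothetical entrypoint
```

Because both Deployments reference the same image tag, one build and one release still produce both processes; only the runtime shape differs.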
answered 9 hours ago
NiRR
1,86611943