
5 questions everyone should ask about microservices

Thinking about migrating your monolithic applications to microservices? Considering these questions sooner rather than later will lead to smarter decisions.

One of the joys of my role is that I’m often talking to customers (or potential customers) about their concerns, their plans, and their strategies for moving their business forward based on open source technologies.

When discussing the application development impact on existing developed applications and transitioning to microservices, there are five questions that keep popping up in one form or another. They are the same regardless of the size of the organization, and often become part of strategy discussions later in the process, as organizations move towards microservice architectures.

This article touches on the five questions, in no particular order, that everyone should ask about microservices. It’s based on interactions with organizations in the process of adopting microservices for existing development and for delivering modern applications.

1. What is the performance impact of microservices?

This first question comes up every time, usually once initial strategy planning is complete and migration of an existing (monolithic) application begins:

"How do we approach the performance impact on communications when a monolith gets split up into distributed services (microservices), moving from internal calls to distributed REST APIs?"

The basis of the question is uncertainty about what’s going to happen once they start decomposing existing monolithic applications in favor of microservices where possible. What we need to understand is that the goal of splitting out these services is to favor deployment speed over API invocation speed.

The main reason to split off microservices out of an existing monolith should be to isolate the development of the service within a team, completely separate from the application development team. The service engineering team can now operate at their own intervals, deploying changes weekly, daily, or even hourly if a noteworthy Common Vulnerabilities and Exposures (CVE) is applicable.

The penalty of added network invocations is the trade-off for escaping your monolith’s highly regimented deployment requirements, which force it to move at two- to three-month deployment intervals. Now, with microservice teams, you can react more quickly to business, competition, and security demands through faster delivery intervals. Equally critical is to look closely at how coarse-grained your network calls become in this new distributed architecture.
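To make the granularity point concrete, here is a minimal back-of-the-envelope sketch. The service names, item counts, and the flat 5 ms per-call overhead are all assumptions for illustration, not measurements; real latency varies with payload size, serialization, and network conditions.

```python
# A sketch (hypothetical numbers) of why call granularity matters once
# in-process method calls become network calls between microservices.

NETWORK_OVERHEAD_MS = 5  # assumed per-call latency floor (connection, serialization)

def fine_grained_order_fetch(item_count: int) -> int:
    """One call for the order, then one call per line item (a 'chatty' client)."""
    round_trips = 1 + item_count
    return round_trips * NETWORK_OVERHEAD_MS

def coarse_grained_order_fetch(item_count: int) -> int:
    """A single aggregate endpoint returns the order with its items embedded."""
    return 1 * NETWORK_OVERHEAD_MS

if __name__ == "__main__":
    items = 50
    print(f"fine-grained:   {fine_grained_order_fetch(items)} ms of overhead")
    print(f"coarse-grained: {coarse_grained_order_fetch(items)} ms of overhead")
```

With 50 line items, the chatty design pays the network tax 51 times; a coarse-grained aggregate endpoint pays it once. This is why API design, not just service boundaries, deserves attention during decomposition.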

2. How do we handle state while splitting up our monoliths?

After the initial discussion around the first question, it’s usually followed by questions around how to deal with state in a monolithic application:

"How do we deal with services that are split off from a monolith and are stateful, while still getting the benefits of a container platform like OpenShift?"

There are two kinds of stateful applications. The first is a business application that uses either in-memory state or database state. The second is a purpose-built database engine that needs highly performant, even low-level, access to the underlying disk; it might even extend the operating system kernel itself.

The first option is the mainstream way of developing and deploying cloud-native applications with their stateful components, while still splitting out microservices using a container platform like OpenShift, based on Kubernetes.

Option two is another story and is often best left to specialized vendors while you focus on your domain-specific business value and delivering that to your customers.

3. How do we handle our data with distributed microservices?

State discussions are central to the move to microservices for many developers and architects. Following that train of thought leads to questions around how to create a consistent state view using the data sources currently in their architecture:

"How do we deal with databases backing distributed services so that there is a single state view across the entire system (and how do we administer and manage this)?"

The best part about this discussion is that a colleague of ours has addressed this quite extensively in a book. Even better, it’s free to download and provides a lot of tips. Another option worth looking at is Debezium, for smart database change data capture.

4. How do we test stateful (micro)services?

Eventually, one gets to the point of testing all these fancy new distributed microservices, along with the data source state spread across the application landscape. This often leads to the question:

"How can the state of a main application be transferred from production to a test environment when there are many data sources tied to stateful services in production?"

With each microservice operating as if it belongs to another business, one with which you have a business-to-business relationship, the team maintaining the microservice is forced to maintain a strong API and its own test suite for each one.
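A per-team test suite of this kind often centers on contract tests: assertions about the API’s shape, not its internals. The sketch below uses a hypothetical in-process handler and field names purely for illustration; a real suite would exercise the deployed HTTP endpoint of the service under test.

```python
# A sketch of the kind of contract test each microservice team might own.
# get_customer is a hypothetical stand-in for the service's API handler.

def get_customer(customer_id: str) -> dict:
    """Stand-in for the microservice's API handler."""
    return {"id": customer_id, "name": "Test Customer", "status": "active"}

def test_customer_contract():
    """The contract: required fields and value constraints, nothing about internals."""
    response = get_customer("c-42")
    assert set(response) >= {"id", "name", "status"}
    assert isinstance(response["id"], str)
    assert response["status"] in {"active", "inactive"}

test_customer_contract()
```

Because the test only pins down the published contract, the owning team remains free to change the implementation behind the API without breaking its consumers.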

Remember, a good microservice should be a black box. Business partners (in this case, internal ones) rarely have all the data; they only have the data they need for testing purposes. Testing tends to happen more in production in this new world, which is why we see the value of a service mesh technology such as Istio.

It’s possible to use Debezium to replicate data through a common Kafka backbone, anonymize it, and drop it to the locations needed by various independent microservice development teams (the business partners internal to your organization).
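The anonymization step in that pipeline can be sketched simply. Below, plain dicts stand in for Debezium change events flowing over Kafka, and the field names and hash truncation are assumptions for illustration; a real pipeline would use Kafka consumers/producers or Debezium single message transforms rather than an in-process function.

```python
# A sketch of anonymizing change events before they reach test environments.
# Hashing sensitive fields (instead of deleting them) keeps values stable,
# so joins across data sources still line up in the test data.

import hashlib

SENSITIVE_FIELDS = {"email", "name", "phone"}  # hypothetical field names

def anonymize(event: dict) -> dict:
    """Replace sensitive values with a stable, truncated hash."""
    masked = {}
    for key, value in event.items():
        if key in SENSITIVE_FIELDS:
            masked[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            masked[key] = value
    return masked

change_event = {"id": 7, "email": "jane@example.com", "balance": 100}
safe_event = anonymize(change_event)
assert safe_event["id"] == 7 and safe_event["email"] != change_event["email"]
```

Each microservice team (the internal business partners) then consumes only the anonymized topics it needs, never the raw production data.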

5. Should we use API management or a service mesh?

Finally, the last question centers around a bit of confusion on what the roles are for a service mesh and an API gateway:

"How would an API gateway such as 3Scale / Service Mesh be used to migrate applications to a more modern way of working?"

First off, Red Hat 3Scale API Management is a technology offered by Red Hat as a supported product. Its focus is very different from a service mesh technology such as Istio.

As mentioned previously, microservice development teams should function independently, much like a business-to-business partner. Their API is the front door to their microservice, and with API management tooling they can publish that API in your managed API layer for consumption. A service mesh, by contrast, is concerned with microservices being able to communicate with each other: discovery, load balancing, failure recovery, metrics, and monitoring. It solves the inter-service challenges that distributed services encounter, and does it in a novel way.

Are microservices in your future?

Now you’ve seen the five questions that many are asking out there in the wild, most likely some of the same questions you’ve been wrestling with in your organization. Since they’re based on interactions with organizations in the process of modernizing their service layers, these questions should help you with your own transition toward using modern architectures to deliver applications to your customers.

These insights should help you make good decisions, tackle the complexity of your existing monolithic applications, and move toward a fundamentally sound microservices architecture for years to come.

Eric D. Schabell

Eric is Red Hat’s Global Technology Evangelist and Portfolio Architect Director. He’s renowned in the development community as a speaker, lecturer, author, and baseball expert.
