
We’ve been talking a lot about why organizations should adopt microservices and use a four-tier architecture when building applications and websites. Microservices enable architects, developers, and engineers to keep pace with the demand for new app functionality and better performance across distributed experiences and devices. They provide technology that is independent, flexible, resilient, easy to deploy, organizationally aligned, and easily composed.

Readings in Design, Development, and Adoption of Microservices

Before we begin talking about implementation of a microservices architecture, I’d like to share some reference books that I’ve found to be helpful. Although these books aren’t specifically about “microservices,” they explain the design and development processes that are core components of a microservices architecture and approach to modern application development.       

  • REST in Practice: Hypermedia and Systems Architecture by Jim Webber, Savas Parastatidis, and Ian Robinson

    This book explains and demonstrates how to use a REST API system to create elegant and simple distributed systems. Specifically, it provides examples, techniques, and best practices to solve infrastructure challenges as companies expand and grow rapidly.

  • Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions by Gregor Hohpe and Bobby Woolf

    This book explores best practices for planning and designing systems to deploy and continuously integrate applications. The authors use a technical vocabulary and visual notation framework to describe large-scale integration solutions across many technologies including JMS, MSMQ, TIBCO ActiveEnterprise, Microsoft BizTalk, SOAP, and XSL.

  • The Modern Firm: Organizational Design for Performance and Growth by John Roberts

    This book differs from the others in focusing on the business and team structures that are best suited for a microservices-oriented application development process. It explores routines, processes, and corporate cultures that contribute to performance and growth.

  • Release It! Design and Deploy Production-Ready Software by Michael T. Nygard

    One of the biggest challenges companies face is waiting too long to deploy new features or products. This book explains how to release new code and designs as soon as they’re production-ready, using modern best practices such as microservices and continuous integration.

Microservices Processes and Tools

Your core microservices are only part of a complete application development and delivery architecture. You also need to choose tools for inter-service communication, traffic monitoring, failure detection, and other functions. Here are some types of software and specific tools that can help you transition to a microservices implementation.

Open Source Software

If you’re building microservices-based applications, you will find that much of the best code is open source. Much of it was written by, or has significant extensions and contributions from, companies with top-notch technical talent such as Google, LinkedIn, Netflix, and Twitter. Because of the nature of these companies, these projects are usually built with scalability and extensibility in mind. All of this makes the software development landscape very different from ten or fifteen years ago, when you needed a big team and lots of money just to buy the software, let alone the hardware. There’s no long procurement cycle or waiting for vendors to incorporate the features you need. You can change and extend the software yourself.

External and Internal Communication Protocols

You’re going to build many microservices with APIs, so you need to consider from the start how those APIs are going to be consumed. Edge and public services are often accessed from a browser, which handles JSON natively, or from clients written in JavaScript or other languages such as Python that can easily consume and interact with your APIs. XML can take the place of JSON, but it is more difficult to process and thus heavier weight. In any case, for edge and public services you want a stable API that carries the communication protocol in the object.
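Here’s a rough sketch of an edge service returning JSON to browser and script clients, written in Go. The resource, fields, and port are invented for illustration and aren’t taken from any particular service:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Product is a hypothetical resource exposed by an edge service.
type Product struct {
	ID    string  `json:"id"`
	Name  string  `json:"name"`
	Price float64 `json:"price"`
}

func main() {
	http.HandleFunc("/products/42", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		// JSON is trivial for browsers and scripting languages to consume.
		json.NewEncoder(w).Encode(Product{ID: "42", Name: "widget", Price: 9.99})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```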

For high-speed communication between microservices within the context of an application, however, neither JSON nor XML is efficient enough. Here you want more compact binary-encoded data. Commonly used tools include Protocol Buffers from Google, and Thrift and Avro from Apache. A newer protocol is Simple Binary Encoding from Real Logic Limited, which is reportedly about 20 times faster than Protocol Buffers.
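None of those libraries is shown here, but a tiny Go comparison using only the standard library illustrates why binary encoding is so much more compact than JSON for inter-service traffic (the Reading struct is a made-up message type, and encoding/binary stands in for the purpose-built formats above):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"encoding/json"
	"fmt"
)

// Reading is a hypothetical message exchanged between two internal services.
type Reading struct {
	SensorID uint32
	Value    float64
}

func main() {
	r := Reading{SensorID: 7, Value: 23.5}

	// Text encoding repeats the field names in every single message.
	j, _ := json.Marshal(r)

	// Binary encoding writes fixed-width fields with no names or delimiters.
	var buf bytes.Buffer
	binary.Write(&buf, binary.LittleEndian, r)

	fmt.Printf("JSON:   %d bytes (%s)\n", len(j), j) // 27 bytes
	fmt.Printf("binary: %d bytes\n", buf.Len())      // 12 bytes
}
```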

Using binary encoding does mean that there has to be a client library for consuming the microservice API. You might be tempted not to write the library yourself, on the grounds that the API is already self-describing. The danger is that whoever steps in and writes it (say, a developer of a consuming application) usually doesn’t understand the microservice as well as you do and is less likely to get things like error handling right. You then end up being encouraged to adopt and maintain a library that you didn’t write.
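As a sketch of what such a hand-maintained client library might look like (the inventory service, its /items endpoint, and the Item type are all hypothetical), much of the value is in centralizing the error handling that ad hoc consumers tend to skip:

```go
// Package inventory is a hypothetical client library for an "inventory"
// microservice; the endpoint and error cases are assumptions for the sketch.
package inventory

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type Client struct {
	BaseURL string
	HTTP    *http.Client
}

func New(baseURL string) *Client {
	return &Client{BaseURL: baseURL, HTTP: &http.Client{Timeout: 2 * time.Second}}
}

type Item struct {
	SKU   string `json:"sku"`
	Count int    `json:"count"`
}

// GetItem handles the cases consumers often forget: timeouts, non-200
// statuses, and malformed response bodies.
func (c *Client) GetItem(sku string) (*Item, error) {
	resp, err := c.HTTP.Get(c.BaseURL + "/items/" + sku)
	if err != nil {
		return nil, fmt.Errorf("inventory: request failed: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("inventory: unexpected status %d", resp.StatusCode)
	}
	var it Item
	if err := json.NewDecoder(resp.Body).Decode(&it); err != nil {
		return nil, fmt.Errorf("inventory: bad response body: %w", err)
	}
	return &it, nil
}
```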

Data Storage

If you currently have a monolithic data store behind your applications, you need to split it up (refactor it) as you transition to microservices. One source of guidance is Refactoring Databases: Evolutionary Database Design by Scott W. Ambler and Pramod J. Sadalage. You can use SchemaSpy to analyze your schemata and tease them apart. Your goal is to work through the microservices one at a time, determine the materialized views of the tables that each microservice needs, and transfer that data from the combined database into a microservice-specific data store. This isn’t always as difficult as you might anticipate, because a monolithic database often turns out to be a collection of distinct data sets, each of them accessed by just one service. In that case it’s pretty easy to split up the database, and you can do so incrementally.
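One way to keep the migration incremental is to put a narrow data-access interface in front of each service and point it first at the monolith, later at the service’s own store. The sketch below assumes a hypothetical orders service and table names; it isn’t taken from the book or from SchemaSpy:

```go
package orders

import "database/sql"

// OrderStore is the only data-access surface the orders service uses, so the
// backing database can change without touching business logic.
type OrderStore interface {
	Totals(customerID string) (float64, error)
}

type sqlStore struct{ db *sql.DB }

func (s sqlStore) Totals(customerID string) (float64, error) {
	var total float64
	// Placeholder syntax ($1 vs ?) depends on the driver in use.
	err := s.db.QueryRow(
		`SELECT COALESCE(SUM(amount), 0) FROM orders WHERE customer_id = $1`,
		customerID).Scan(&total)
	return total, err
}

// NewStore starts out pointed at the shared monolithic database; once the
// orders tables have been migrated, the same constructor is handed a
// connection to the service's own data store and nothing else changes.
func NewStore(db *sql.DB) OrderStore {
	return sqlStore{db: db}
}
```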

You can also break up a database gradually over time, moving toward polyglot persistence (using different kinds of data stores for different kinds of data). One tool for this is a Netflix OSS project called staash (a pun on STaaS, storage tier as a service). It’s a Java app that provides a RESTful API on the front end and talks to both Cassandra and MySQL databases on the back end, so you can interpose it as a standard pattern for developing data access layers. You can add a new database on the back end and new HTTP resources on the front end, with staash as a single package that already incorporates the MySQL and Cassandra functionality and all the necessary glue.
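The sketch below is not staash itself; it only illustrates the general pattern of a thin HTTP data-access tier that routes requests to whichever back-end store holds a given dataset. The URL shape and the in-memory “stores” are stand-ins for real database adapters:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

// store is anything that can answer a key lookup; real implementations would
// wrap MySQL, Cassandra, and so on.
type store interface {
	Get(key string) (string, error)
}

type memStore map[string]string

func (m memStore) Get(key string) (string, error) { return m[key], nil }

func main() {
	// One back end per logical dataset; new databases are added here without
	// changing any consuming application.
	backends := map[string]store{
		"users":    memStore{"1": "alice"},
		"sessions": memStore{"abc": "active"},
	}

	http.HandleFunc("/data/", func(w http.ResponseWriter, r *http.Request) {
		// Assumed URL shape: /data/<dataset>/<key>
		parts := strings.Split(strings.TrimPrefix(r.URL.Path, "/data/"), "/")
		if len(parts) != 2 {
			http.Error(w, "expected /data/<dataset>/<key>", http.StatusBadRequest)
			return
		}
		b, ok := backends[parts[0]]
		if !ok {
			http.Error(w, "unknown dataset", http.StatusNotFound)
			return
		}
		v, err := b.Get(parts[1])
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintln(w, v)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```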

If you’re concerned about the issue of consistency across your distributed data stores, learn about Kyle Kingsbury’s Jepsen tests, which are becoming the standard way to test how well distributed systems react to network partitions. Most distributed databases fail the tests, and interesting bugs are exposed. The tests can help you identify and eliminate practices that are common but not really correct.

Monitoring

Monitoring a microservices deployment is difficult because the set of services changes so rapidly: new services are constantly being added, new metrics collected, new versions deployed, and service instances scaled up and down. In such an environment there’s not enough baseline data for an automated threshold-analysis tool to learn what “normal” traffic looks like. The tool tends to generate lots of false alarms, particularly when you start up a new microservice it has never seen before. The challenge is to build systems that react appropriately to status changes in an environment where changes are so frequent that everything looks unusual all the time.

A microservices architecture also involves complex patterns of remote calls as the services communicate. That makes end-to-end tracking of request flows more difficult, but the tracking data is vital to diagnosing problems. You need to be able to trace how process A called B, which called C, and so on. One way to do this is to instrument HTTP headers with globally unique identifiers (GUIDs) and transaction IDs.
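A minimal version of that instrumentation, assuming the common X-Request-ID header convention rather than anything prescribed here, might be a small piece of Go middleware:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"log"
	"net/http"
)

const requestIDHeader = "X-Request-ID" // a common convention; adjust to taste

// withRequestID reuses the caller's correlation ID if one arrived, otherwise
// generates one, so every hop in A -> B -> C logs the same identifier.
func withRequestID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := r.Header.Get(requestIDHeader)
		if id == "" {
			buf := make([]byte, 16)
			rand.Read(buf)
			id = hex.EncodeToString(buf)
		}
		w.Header().Set(requestIDHeader, id)
		log.Printf("request_id=%s %s %s", id, r.Method, r.URL.Path)

		// When this service calls a downstream service, copy the same header
		// onto the outgoing request so the trace stays connected.
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})
	log.Fatal(http.ListenAndServe(":8080", withRequestID(mux)))
}
```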

Continuous Delivery and DevOps

In a microservices architecture, you’re deploying small software changes very frequently. The changes most likely to break the system often don’t involve deploying new code at all, but rather switching on a feature for all clients instead of the limited number who were using it during testing. For example, if a feature causes a small performance degradation, you might not notice any ill effects during the test, but multiplying the slight delay by all clients can suddenly bring the system down. To deal with a situation like this, you must very quickly both detect the problem and roll back to the previous configuration.
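One common way to make such a switch reversible is a percentage-based feature flag that can be dialed back instantly. The sketch below is only an illustration with invented names (the flag, bucketing scheme, and client IDs), not a recommendation of any particular feature-flag system:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync/atomic"
)

// rolloutPercent is the share of clients that see the new feature; storing it
// atomically lets an operator dial it back to 0 instantly, with no deploy.
var rolloutPercent atomic.Int64

// enabledFor buckets a client deterministically, so the same client always
// gets the same answer while the percentage stays fixed.
func enabledFor(clientID string) bool {
	h := fnv.New32a()
	h.Write([]byte(clientID))
	return int64(h.Sum32()%100) < rolloutPercent.Load()
}

func main() {
	rolloutPercent.Store(5) // test with 5% of clients first
	fmt.Println(enabledFor("client-42"))

	// If metrics degrade after going to 100%, rolling back is one store away.
	rolloutPercent.Store(0)
	fmt.Println(enabledFor("client-42"))
}
```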

To detect problems quickly, health checks should run every 5 to 10 seconds, not every 1 to 5 minutes as is common. At a frequency of once per minute, it might take 5 minutes before it becomes clear that the change you’re seeing in a metric really indicates a problem. Another reason to take frequent measurements is that most people have a short attention span (the amount of time they attend to a new stimulus before getting distracted). According to recent research, the average person’s attention span is 8 seconds, down from 12 seconds in 2000. The point is that for people to respond to an event in a timely way, the delay between the event occurring and it being reported needs to be shorter than the average attention span. The upside of short attention spans, on the other hand, is that users are also less likely to notice a 10-second outage.
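As an illustration of checking that often, here’s a sketch of a poller that hits a hypothetical /health endpoint every 5 seconds and alerts after a few consecutive failures; the endpoint, interval, and threshold are all assumptions:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	const target = "http://orders.internal:8080/health" // hypothetical endpoint
	client := &http.Client{Timeout: 2 * time.Second}

	// Checking every 5 seconds (rather than every minute or more) means a bad
	// change shows up within a few samples instead of several minutes later.
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()

	failures := 0
	for range ticker.C {
		resp, err := client.Get(target)
		healthy := err == nil && resp.StatusCode == http.StatusOK
		if resp != nil {
			resp.Body.Close()
		}
		if healthy {
			failures = 0
			continue
		}
		failures++
		log.Printf("health check failed (%d in a row)", failures)
		if failures >= 3 {
			log.Printf("alerting: %s looks unhealthy", target)
		}
	}
}
```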

To make it easy to roll back to a working configuration, log the previous configuration in a central location before enabling the feature. This way anyone can revert to the working code if need be.

Conclusion

Adopting microservices requires some large changes in your code base, as well as in the culture and structure of your organization. In this post I’ve shared some suggestions and best practices for open source software, external and internal communication protocols, data storage, monitoring, and continuous delivery and DevOps. Now is the time to begin the transition to a microservices architecture if you haven’t already started. Remember, the transition can be done incrementally and doesn’t have to happen overnight.

 


About the Author

Patrick Nommensen

Demand Generation Manager
