- Mastering Spring Cloud
- Piotr Minkowski
Distributed tracing with Sleuth
Another of Spring Cloud's essential features is distributed tracing, implemented in the Spring Cloud Sleuth library. Its primary purpose is to associate the subsequent requests dispatched between different microservices while processing a single input request. In most cases these are HTTP requests, so the tracing mechanism is based on HTTP headers. The implementation is built on top of SLF4J and MDC. SLF4J provides a facade and abstraction over specific logging frameworks such as Logback, Log4j, or java.util.logging. MDC, short for Mapped Diagnostic Context, is a solution for distinguishing log output from different sources and enriching it with additional information that might not be available in the actual scope.
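To make this concrete, here is plain SLF4J and MDC usage with no Sleuth involved (the requestId key is just an illustrative name): values put into the MDC are attached to every log statement on the current thread and can be referenced from the logging pattern.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcExample {

    private static final Logger LOGGER = LoggerFactory.getLogger(MdcExample.class);

    public static void main(String[] args) {
        // Enrich all subsequent log output on this thread with a context value.
        MDC.put("requestId", "a1b2c3");
        try {
            // With a Logback pattern containing %X{requestId},
            // this line is printed together with "a1b2c3".
            LOGGER.info("Handling request");
        } finally {
            MDC.clear(); // Always clean up thread-local state.
        }
    }
}
```

Sleuth does essentially the same thing for you: it populates the MDC with trace and span identifiers instead of requiring manual put/clear calls.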
Spring Cloud Sleuth adds trace and span IDs to the SLF4J MDC, so that we are able to extract all the logs belonging to a given trace or span. It also adds some other entries, such as the application name or the exportable flag. It integrates with the most popular communication mechanisms, such as Spring's RestTemplate, the Feign client, Zuul filters, Hystrix, and Spring Integration message channels. It can also be used together with RxJava or scheduled tasks. To enable it in your project, you should add the spring-cloud-starter-sleuth dependency. The use of the basic span and trace ID mechanism is completely transparent to the developer.
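With the starter on the classpath, no further code is needed. The following is a minimal sketch of a hypothetical order service that logs and calls a downstream service; the service name, port, and endpoint are assumptions for illustration. Sleuth instruments the RestTemplate bean, so the tracing headers travel to the next service automatically:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@RestController
public class OrderApplication {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderApplication.class);

    @Autowired
    private RestTemplate restTemplate;

    // Sleuth adds an interceptor to RestTemplate beans, so outgoing
    // requests carry the X-B3-TraceId/X-B3-SpanId headers automatically.
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

    @GetMapping("/order")
    public String order() {
        // Printed with the [appName,traceId,spanId,exportable] prefix
        // that Sleuth contributes through the MDC.
        LOGGER.info("Processing order request");
        // Hypothetical downstream service; the same trace ID
        // appears in its logs as well.
        return restTemplate.getForObject("http://localhost:8081/customer", String.class);
    }

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }
}
```

Every line this service logs is prefixed with entries like [order-service,44462edc42f2ae73,44462edc42f2ae73,false], where the last value is the exportable flag mentioned above.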
Adding tracing headers is not the only feature of Spring Cloud Sleuth. It is also responsible for recording timing information, which is useful in latency analysis. Such statistics can be exported to Zipkin, a tool for querying and visualizing timing data.
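Assuming the default HTTP transport, pointing the application at a Zipkin server requires only the spring-cloud-starter-zipkin dependency and, if the server does not run at the default location, a single property:

```properties
# Address of the Zipkin server; http://localhost:9411 is also the default
spring.zipkin.baseUrl=http://localhost:9411
```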
Frequently, there is no need to analyze everything; the volume of input traffic can be so high that we need to collect only a certain percentage of the data. For that purpose, Spring Cloud Sleuth provides a sampling policy, which lets us decide how much of the input traffic is sent to Zipkin. A second smart solution to the big-data problem is to send statistics through a message broker instead of the default HTTP endpoint. To enable this feature, we have to include the spring-cloud-sleuth-stream dependency, which allows our application to become a producer of tracing messages sent to Apache Kafka or RabbitMQ.
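The sampling policy can be tuned with a property or overridden with a custom Sampler bean. Below is a minimal sketch against the Sleuth 1.x API used here; note that later Sleuth versions rename the percentage property to spring.sleuth.sampler.probability:

```java
import org.springframework.cloud.sleuth.Sampler;
import org.springframework.cloud.sleuth.sampler.AlwaysSampler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SamplingConfig {

    // Export every span to Zipkin; convenient for development,
    // but too expensive for high-volume production traffic.
    @Bean
    public Sampler defaultSampler() {
        return new AlwaysSampler();
    }
}
```

In production, keeping the default percentage-based sampler and tuning it with spring.sleuth.sampler.percentage=0.1 (10% of requests) is usually the better choice.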