
Monitoring Spring WebFlux Microservices with New Relic

In this guide, we’ll walk through monitoring a reactive Spring Boot WebFlux application with New Relic, using the @Trace annotation for detailed transaction tracking, custom parameters for richer transaction attributes, and distributed tracing for complex service chains.

Prerequisites

  1. A Spring Boot WebFlux application.
  2. The New Relic Java agent attached to your application, plus the newrelic-api dependency on the classpath (it provides the @Trace annotation and the NewRelic API used below).
  3. Distributed tracing enabled in newrelic.yml:
distributed_tracing:
  enabled: true

Step 1: Instrument the Main Endpoint

Our main entry point is the processRequest endpoint, which handles validation, an external API call, and data processing. Here’s how we add @Trace with dispatcher = true so that the agent starts a transaction for this method and reports it under our custom metric name.



import com.newrelic.api.agent.NewRelic;
import com.newrelic.api.agent.Trace;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

@RestController
public class SampleController {

    private static final Logger logger = LoggerFactory.getLogger(SampleController.class);
    private final SampleService sampleService;
    private final WebClient webClient;

    public SampleController(SampleService sampleService, WebClient.Builder webClientBuilder) {
        this.sampleService = sampleService;
        this.webClient = webClientBuilder.baseUrl("https://jsonplaceholder.typicode.com/todos/1").build();
    }

    // dispatcher = true tells the agent to start (or join) a transaction for this method,
    // reported under the custom metric name below
    @GetMapping("/process")
    @Trace(dispatcher = true, metricName = "Custom/processRequest")
    public Mono<String> processRequest(@RequestParam String input) {
        NewRelic.addCustomParameter("inputParam", input);
        logger.info("Starting process for Trace ID: {}", NewRelic.getAgent().getTransaction().getTraceId());

        return sampleService.validateInput(input)
                .flatMap(validatedInput -> callExternalService(validatedInput))
                .flatMap(externalData -> sampleService.processData(input, externalData))
                .doOnSuccess(result -> logger.debug("Processing complete with result: {}", result))
                .doOnError(e -> logger.error("Error processing request", e));
    }

    // async = true marks this as asynchronous work belonging to the calling transaction
    @Trace(async = true, metricName = "Custom/callExternalService")
    private Mono<String> callExternalService(String validatedInput) {
        NewRelic.addCustomParameter("validatedInput", validatedInput);
        return webClient.get()  // no uri(...) needed; baseUrl already points at the full resource URL
                .retrieve()
                .bodyToMono(String.class)
                .doOnNext(data -> NewRelic.addCustomParameter("externalData", data))
                .doOnError(e -> logger.error("Failed to call external service", e));
    }
}
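
To see the Custom/processRequest transaction in New Relic, you just need to drive traffic to the endpoint. Below is a minimal sketch (not part of the original post) that exercises it from a WebFlux integration test; it assumes spring-boot-starter-test is on the classpath, the test name and the "demo" input value are illustrative, and it makes the real external call to jsonplaceholder, so it needs network access. The New Relic API calls no-op when the agent is not attached, so the test runs either way.

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.reactive.server.WebTestClient;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class SampleControllerIT {

    // WebTestClient is auto-configured against the random local port when WebFlux is on the classpath
    @Autowired
    private WebTestClient webTestClient;

    @Test
    void processEndpointCompletesSuccessfully() {
        webTestClient.get()
                .uri(uriBuilder -> uriBuilder.path("/process").queryParam("input", "demo").build())
                .exchange()
                .expectStatus().isOk()
                .expectBody(String.class)
                .consumeWith(result -> Assertions.assertNotNull(result.getResponseBody()));
    }
}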


Step 2: Instrument Sub-Methods for Traceability

Here, we add @Trace to the validateInput and processData methods so that each one shows up as its own segment within the main transaction trace. Setting async = true tells the agent the traced work may complete asynchronously, which matches WebFlux’s non-blocking execution model.

import com.newrelic.api.agent.NewRelic;
import com.newrelic.api.agent.Trace;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Mono;

import java.time.LocalTime;

@Service
public class SampleService {

    @Trace(async = true, metricName = "Custom/validateInput")
    public Mono<String> validateInput(String input) {
        NewRelic.addCustomParameter("validatedInput", input);
        return Mono.just("Validated: " + input);
    }

    @Trace(async = true, metricName = "Custom/processData")
    public Mono<String> processData(String input, String externalData) {
        NewRelic.addCustomParameter("combinedData", input + " " + externalData);
        return Mono.just("Processed Data: " + input + " at " + LocalTime.now());
    }
}
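
One caveat worth hedging: depending on the agent version and which Reactor schedulers the pipeline hops across, asynchronous segments can occasionally show up detached from the originating transaction. If that happens, the agent’s Token API lets you link the work back explicitly. Below is a minimal sketch of that pattern, not part of the original example; the class, method names, and scheduler choice are illustrative.

import com.newrelic.api.agent.NewRelic;
import com.newrelic.api.agent.Token;
import com.newrelic.api.agent.Trace;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class TokenLinkingExample {

    @Trace(dispatcher = true, metricName = "Custom/tokenLinkedWork")
    public Mono<String> tokenLinkedWork(String input) {
        // Grab a token while still on the thread that owns the transaction
        Token token = NewRelic.getAgent().getTransaction().getToken();
        return Mono.fromCallable(() -> expensiveStep(input, token))
                .subscribeOn(Schedulers.boundedElastic());
    }

    @Trace(async = true, metricName = "Custom/expensiveStep")
    private String expensiveStep(String input, Token token) {
        // Re-attach this thread's work to the original transaction, then release the token
        token.linkAndExpire();
        return "expensive result for " + input;
    }
}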

Step 3: Viewing Results in New Relic

Main Transaction: In New Relic APM under Transactions, locate Custom/processRequest to analyze the main endpoint’s performance.

Sub-Method Tracking: Under Distributed Traces or Transaction Traces, view the breakdown of validateInput, callExternalService, and processData as segments within each transaction.

Custom Parameters: The custom parameters we set (inputParam, validatedInput, etc.) appear as transaction attributes and can be used for deeper filtering in NRQL queries.
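
Custom parameters aren’t limited to strings: the NewRelic API also accepts numeric values, and errors can be reported explicitly so they attach to the current transaction. A small sketch (method and attribute names here are illustrative, not from the original example):

import com.newrelic.api.agent.NewRelic;

public class AttributeExamples {

    public void recordAttributes(String input, long payloadSizeBytes, Exception failure) {
        // String and numeric attributes both appear on the transaction
        // and can be filtered or faceted in NRQL
        NewRelic.addCustomParameter("inputParam", input);
        NewRelic.addCustomParameter("payloadSizeBytes", payloadSizeBytes);

        // Explicitly report an error so it is tied to the current transaction
        if (failure != null) {
            NewRelic.noticeError(failure);
        }
    }
}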

Step 4: Using NRQL to Query Performance Data

With New Relic’s NRQL, query average durations and segment-specific data:

Average Duration of Validation:

SELECT average(duration)
FROM Span
WHERE name = 'Custom/validateInput'
SINCE 1 week ago

External Call Monitoring:

SELECT average(duration)
FROM Span
WHERE name = 'Custom/callExternalService'
SINCE 1 week ago

Overall Transaction Time:

SELECT average(duration)
FROM Transaction
WHERE name = 'Custom/processRequest'
SINCE 1 week ago
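
Beyond the built-in Transaction and Span event types, the agent API can also record custom events, which NRQL queries by event type. A minimal sketch, not part of the original post (the ProcessRequestEvent type and attribute names are illustrative):

import com.newrelic.api.agent.NewRelic;

import java.util.HashMap;
import java.util.Map;

public class CustomEventExample {

    public void recordProcessingEvent(String input, long durationMillis) {
        // Becomes queryable in NRQL as: FROM ProcessRequestEvent SELECT ...
        Map<String, Object> attributes = new HashMap<>();
        attributes.put("inputParam", input);
        attributes.put("durationMillis", durationMillis);
        NewRelic.getAgent().getInsights().recordCustomEvent("ProcessRequestEvent", attributes);
    }
}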


Reference documentation for New Relic account setup and Java agent integration:

https://docs.newrelic.com/install/java/?deployment=gradle&framework=springboot

Let me know if you’d like the full example codebase as well.
