
Performance Testing a Spring Boot Application with Gatling

In this blog, we'll explore using Gatling, a powerful load-testing tool, to test a simple Spring Boot application. We'll set up a performance test for a sample REST API endpoint and walk step by step through how Gatling integrates with the project, from writing a simulation to running it and reading the results.


What is Gatling?

Gatling is a highly performant open-source load-testing tool. It helps simulate high-traffic scenarios for your APIs, ensuring your application can handle the expected (or unexpected) load efficiently.


1. Setting Up the Spring Boot Project

We'll create a Spring Boot REST API with a simple /search endpoint that accepts two query parameters, query and category.

Step 1.1: Create the REST Controller

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api")
public class SearchController {

    @GetMapping("/search")
    public ResponseEntity<String> search(
            @RequestParam String query,
            @RequestParam String category) {
        // Simulate a simple search response
        String response = String.format("Searching for '%s' in category '%s'", query, category);
        return ResponseEntity.ok(response);
    }
}


Step 1.2: Add Dependencies in pom.xml

Make sure your project has the following dependencies:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>


Start the Spring Boot application and ensure the /api/search endpoint is reachable, e.g., http://localhost:8080/api/search?query=phone&category=electronics.
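For a quick sanity check, assuming the standard spring-boot-maven-plugin is on the build (it is in projects generated from start.spring.io), you can start the application and hit the endpoint from the command line:

mvn spring-boot:run

curl "http://localhost:8080/api/search?query=phone&category=electronics"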


2. Setting Up Gatling

Step 2.1: Add Gatling Plugin to Your Project

If you use Maven, add the Gatling plugin to your pom.xml for performance testing:


<build>
    <plugins>
        <plugin>
            <groupId>io.gatling</groupId>
            <artifactId>gatling-maven-plugin</artifactId>
            <version>4.0.0</version>
            <executions>
                <execution>
                    <id>gatling-test</id>
                    <phase>integration-test</phase>
                    <goals>
                        <goal>test</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
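Note that the gatling-maven-plugin on its own neither provides the Gatling DSL nor compiles Scala sources, so in practice you also add Gatling as a test-scoped dependency and the scala-maven-plugin to compile the simulations. The snippet below is a minimal sketch; the version numbers are examples, so pick the latest releases compatible with your plugin version:

<!-- In <dependencies>: brings in the Gatling DSL and HTML reports -->
<dependency>
    <groupId>io.gatling.highcharts</groupId>
    <artifactId>gatling-charts-highcharts</artifactId>
    <version>3.9.5</version>
    <scope>test</scope>
</dependency>

<!-- In <build><plugins>: compiles Scala simulations under src/test/scala -->
<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>scala-maven-plugin</artifactId>
    <version>4.8.1</version>
    <executions>
        <execution>
            <goals>
                <goal>testCompile</goal>
            </goals>
        </execution>
    </executions>
</plugin>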


The Gatling Maven plugin picks up simulation scripts from src/test/scala by default, so create that folder in your project:

mkdir -p src/test/scala


3. Writing a Gatling Simulation

Create a file SearchSimulation.scala under src/test/scala with the following content:

Step 3.1: Define the Simulation


import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class SearchSimulation extends Simulation {

  // Base URL for your Spring Boot app
  val httpProtocol = http
    .baseUrl("http://localhost:8080/api") // Change this as per your app
    .acceptHeader("application/json")    // Accept JSON response

  // CSV Feeder to supply data for queries
  val searchFeeder = csv("search_terms_with_categories.csv").circular

  // Scenario definition
  val scn = scenario("Search Simulation")
    .feed(searchFeeder) // Attach the feeder
    .exec(
      http("Search Request")
        .get("/search") // Endpoint path
        .queryParam("query", "#{query}")       // Dynamically inject "query"
        .queryParam("category", "#{category}") // Dynamically inject "category"
        .check(status.is(200)) // Ensure the response is 200 OK
    )

  // Load simulation setup
  setUp(
    scn.inject(
      rampUsers(100).during(30.seconds) // Gradually add 100 users over 30 seconds
    )
  ).protocols(httpProtocol)
}
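Optionally, the same setUp call can carry assertions so the run is flagged as failed when service-level thresholds are breached. The thresholds below are illustrative, not part of the original scenario:

  // Same injection profile as above, plus illustrative assertions
  setUp(
    scn.inject(rampUsers(100).during(30.seconds))
  ).protocols(httpProtocol)
   .assertions(
     global.responseTime.max.lt(1000),        // max response time below 1000 ms
     global.successfulRequests.percent.gt(99) // at least 99% of requests succeed
   )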

Step 3.2: Create a CSV Feeder

Create a file search_terms_with_categories.csv under src/test/resources:

query,category
phone,electronics
laptop,computers
book,education
shoes,footwear


This feeder will provide dynamic data for the simulation.
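As a side note, .circular (loop back to the first row once the file is exhausted) is only one of Gatling's feeder strategies. A brief sketch of the alternatives, all reading the same CSV file:

val queueFeeder   = csv("search_terms_with_categories.csv").queue   // default: the run fails once rows run out
val randomFeeder  = csv("search_terms_with_categories.csv").random  // pick rows at random, never exhausts
val shuffleFeeder = csv("search_terms_with_categories.csv").shuffle // randomized order, then behaves like queue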

4. Running the Gatling Test

Run the simulation using the following Maven command:

mvn gatling:test
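If your project contains more than one simulation, you can point the plugin at a specific class via its gatling.simulationClass property, for example:

mvn gatling:test -Dgatling.simulationClass=SearchSimulation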


Once the test completes, Gatling generates a detailed HTML report in the target/gatling folder. Open the report to see performance metrics like response time, throughput, and error rates.

5. Key Concepts Explained

  1. Scenario: The scenario in Gatling defines user behavior (e.g., feeding data, sending HTTP requests).

  2. Feeder: Feeds dynamic data (from a CSV, JSON, or database) to requests. In this case, search_terms_with_categories.csv feeds values for query and category.

  3. Load Simulation: The setUp block determines how many users execute the scenario and at what rate (e.g., 100 users ramping up over 30 seconds). Alternative injection profiles are shown in the sketch after this list.

  4. HTTP Protocol: Defined globally with http.baseUrl() and applied to all requests. You can also customize headers, timeouts, and other settings, as the sketch after this list illustrates.
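To make the last two points concrete, here is a small illustrative sketch of alternative injection profiles and a more customized protocol; the header names and values are examples, not part of the scenario above:

// Alternative injection profiles (used inside setUp):
//   atOnceUsers(50)                             -- 50 users injected immediately
//   constantUsersPerSec(10).during(1.minute)    -- steady arrival rate of 10 users per second
//   rampUsersPerSec(1).to(20).during(2.minutes) -- arrival rate ramping from 1 to 20 users per second

// A protocol with extra customization:
val customProtocol = http
  .baseUrl("http://localhost:8080/api")
  .acceptHeader("application/json")
  .userAgentHeader("gatling-performance-test") // custom User-Agent header
  .header("X-Request-Source", "gatling")       // illustrative custom request header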






