Author: gcl

  • A beginner’s introduction to Kafka and how to use it in Spring Boot

    The aim of this article is to provide a concise introduction to Kafka’s core concepts. We will briefly explain how Kafka works and how it can be integrated into Spring Boot applications.

    To make the whole thing more practical, I have prepared a very simplified example application based on Spring Boot. So you can follow every step hands-on and test it directly yourself.

    We will start by building a high-level understanding of Kafka to lay a solid foundation. This introduction is largely based on the official Kafka documentation.

    What is Kafka?

    As the official documentation describes it, Apache Kafka is an open-source distributed event streaming platform. It is designed to handle high-throughput, low-latency data pipelines, and companies use it to process real-time data, build event-driven systems, and connect various applications reliably.

    What is event streaming?

    Event streaming is the practice of capturing data in real-time as it is produced (“events”) and processing or reacting to it immediately. This is useful for things like:

    • Logging user interactions on a website
    • Processing sensor data
    • Financial transactions
    • System monitoring

    While Kafka enables real-time processing, it also allows persistent storage of events for long-term use. Events can be retained indefinitely depending on your configuration.

    How Kafka Works

    Kafka is based on a publish-subscribe model. Here is a breakdown of the main components:

    Topic

    A topic is a category or feed name to which records are published. Think of it as a logical container that groups related messages.

    Producer

    A producer sends (publishes) records (events) to a topic.

    Consumer

    A consumer subscribes to one or more topics and processes the records published to them. Typically, multiple consumers can work in parallel by forming a consumer group.

    Partitions

    Partitions divide a topic’s events into logical segments. When events are produced, Kafka uses a key (if provided) to determine which partition the event will be written to. Partitions enable parallel processing of topic data. This means Kafka can distribute the workload across multiple consumers. For example, in a consumer group, each consumer can be assigned one or more partitions. Kafka also guarantees that events within a single partition are always delivered in the exact order in which they were produced—a key feature when event order matters. To learn more about delivery guarantees, refer to the Kafka documentation on Message Delivery Semantics.

    Broker

    A Broker is a Kafka server that stores data and handles client requests. Typically, the data is stored and replicated across multiple brokers.

    We now have the basic understanding to get started with the setup. However, it’s important to note that Kafka is a powerful and complex system, and what we’re covering here is just a beginner-friendly introduction to help you take your first steps.

    Setup

    Next up, we will look at how to set up Kafka using Docker and how to build a simple Spring Boot application that consumes and processes Kafka events.

    Docker setup for Kafka and Zookeeper

    To run Kafka locally, we use Docker Compose to start both Zookeeper and Kafka as containers. Kafka relies on Zookeeper (or KRaft) to manage broker metadata and cluster coordination. The Spring Boot application is configured with Docker Compose support, which means that when you start the application, Kafka and Zookeeper are started automatically as well.

    Note: This is a single-node Kafka setup, suitable for local development and testing. In production, you would use multiple brokers and configure replication and fault tolerance.

    services:
      zookeeper:
        image: confluentinc/cp-zookeeper:latest
        environment:
          ZOOKEEPER_CLIENT_PORT: 2181
          ZOOKEEPER_TICK_TIME: 2000
        ports:
          - "2181:2181"
    
      kafka:
        image: confluentinc/cp-kafka:latest
        depends_on:
          - zookeeper
        ports:
          - "9092:9092"
        environment:
          KAFKA_BROKER_ID: 1
          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
          KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
          KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT
          KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

    Spring Boot Setup

    Now let’s configure Kafka in our Spring Boot application. Here is a minimal application.yml configuration:

    spring:
      kafka:
        bootstrap-servers: localhost:9092
        consumer:
          auto-offset-reset: earliest
          key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
          value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

    This tells Spring Boot where Kafka is running and how to deserialize the messages for the consumer.
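
    The example application produces events from the command line (shown later). If you would rather produce events from Spring itself, a minimal producer sketch using spring-kafka’s KafkaTemplate could look like the following. This class is not part of the example application, and it assumes Spring Boot’s default String serializers for the producer:

    @Service
    public class ActivityProducer {

      private final KafkaTemplate<String, String> kafkaTemplate;

      public ActivityProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
      }

      public void send(String userId, String json) {
        // The key (userId) determines which partition the event is routed to
        kafkaTemplate.send("user-activity", userId, json);
      }
    }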

    Topic

    We’ll define our topic directly in the Spring Boot application using a configuration class.

    @Configuration
    public class KafkaConfig {
      @Bean
      public NewTopic userActivityTopic() {
        return TopicBuilder.name("user-activity")
            .partitions(2)
            .build();
      }
    }

    This creates a topic named user-activity with two partitions.
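
    As mentioned earlier, events can be retained for a long time depending on the topic configuration. If you wanted this topic to retain events indefinitely, a hedged variation of the builder could look like this (using Kafka’s TopicConfig constants; not part of the example application):

    return TopicBuilder.name("user-activity")
        .partitions(2)
        .config(TopicConfig.RETENTION_MS_CONFIG, "-1") // -1 disables time-based deletion
        .build();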

    Event Structure

    Here is the Java model representing the event structure our consumers will work with:

    class ActivityEvent {
      private String userId;
      private String type;
      private String target;
      private String timestamp;
    
      // Getters and setters omitted for brevity
    }

    This will be the format that our Kafka consumer expects in each message.

    Consumer

    To demonstrate Kafka’s partition-based load balancing, we create two Spring Boot applications running on different ports, each containing a consumer with the same groupId. Kafka will automatically assign different partitions to each instance.

    @Service
    public class ActivityConsumer1 {
      private final ObjectMapper objectMapper = new ObjectMapper();
    
      @KafkaListener(topics = "user-activity", groupId = "activity-consumer")
      public void listen(ConsumerRecord<String, String> record) {
        try {
          String json = record.value();
          ActivityEvent event = objectMapper.readValue(json, ActivityEvent.class);
          System.out.printf("Consumer1 - Partition %d: %s - %s%n",
              record.partition(), event.getUserId(), event.getType());
        } catch (Exception e) {
          System.err.println("Consumer1 - Failed to parse event: " + e.getMessage());
        }
      }
    }

    @Service
    public class ActivityConsumer2 {
      private final ObjectMapper objectMapper = new ObjectMapper();
    
      @KafkaListener(topics = "user-activity", groupId = "activity-consumer")
      public void listen(ConsumerRecord<String, String> record) {
        try {
          String json = record.value();
          ActivityEvent event = objectMapper.readValue(json, ActivityEvent.class);
          System.out.printf("Consumer2 - Partition %d: %s - %s%n",
              record.partition(), event.getUserId(), event.getType());
        } catch (Exception e) {
          System.err.println("Consumer2 - Failed to parse event: " + e.getMessage());
        }
      }
    }

    Kafka ensures that each partition is only assigned to one consumer in a given consumer group at a time. So, with two consumers and two partitions, each will process a different stream of messages.

    Producing Messages to Kafka from the CLI

    After entering the Kafka container (docker exec -it … bash), you can use the Kafka console producer to send messages with keys, which will influence how Kafka routes them to partitions.

    • Start the Kafka console producer with key parsing enabled:
    kafka-console-producer \
      --bootstrap-server localhost:9092 \
      --topic user-activity \
      --property "parse.key=true" \
      --property "key.separator=:"
    • Send messages with different keys:
    u1:{"userId":"u1", "type":"click", "target":"button", "timestamp":"2024-01-01T12:00:00"}
    Z:{"userId":"Z", "type":"view", "target":"page", "timestamp":"2025-01-01T12:01:00"}

    Kafka uses the hash of the key to determine the partition:

    • Messages with key u1 may go to partition 0
    • Messages with key Z may go to partition 1

    This demonstrates Kafka’s key-based partition routing.

    Kafka Streams

    So far, we’ve covered Kafka brokers and basic clients (producers and consumers). But Kafka has two additional core components:

    • Kafka Connect: A tool for importing and exporting data between Kafka and external systems (e.g., databases, file systems, Elasticsearch) using prebuilt or custom connectors. For more details, refer to the Kafka Connect documentation.
    • Kafka Streams: A lightweight Java library for processing and transforming data streams directly within your application.

    What is Kafka Streams?

    As stated in the official documentation:

    “The Streams API allows transforming streams of data from input topics to output topics.”

    Kafka Streams enables you to filter, map, aggregate, and route messages — all directly within your application — without the need for external stream processing platforms.

    Example: Kafka Streams – Spring Boot

    The following example demonstrates how to set up a Kafka Streams pipeline in a Spring Boot application. It consumes messages from the user-activity topic, filters out only those with the type “LIKE”, and forwards them to a new topic named likes-only.

    @Bean
    public KStream<String, String> kStream(StreamsBuilder builder) {
      ObjectMapper mapper = new ObjectMapper();
    
      KStream<String, String> stream = builder.stream("user-activity");
    
      stream
          .peek((key, value) -> System.out.printf("Received message: %s%n", value))
          .filter((key, value) -> {
            try {
              JsonNode node = mapper.readTree(value);
              return "LIKE".equalsIgnoreCase(node.get("type").asText());
            } catch (Exception e) {
              System.err.printf("Failed to parse event: %s%n", e.getMessage());
              return false;
            }
          })
          .peek((key, value) -> System.out.printf("Filtered LIKE event: %s%n", value))
          .to("likes-only");
    
      return stream;
    }
    

    If you now send the following event

    u1:{"userId":"u1", "type":"LIKE", "target":"button", "timestamp":"2024-01-01T12:00:00"}

    you will see log output like the following (produced by the two peek steps in the stream):
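
    Received message: {"userId":"u1", "type":"LIKE", "target":"button", "timestamp":"2024-01-01T12:00:00"}
    Filtered LIKE event: {"userId":"u1", "type":"LIKE", "target":"button", "timestamp":"2024-01-01T12:00:00"}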

    Configuration

    Before we can process Kafka messages using the Kafka Streams API, we need to configure the streams application. The following configuration is done in Spring Boot using a @Bean that returns a KafkaStreamsConfiguration. (For detailed information, see the official Spring documentation.)

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    public KafkaStreamsConfiguration kStreamsConfigs() {
      Map<String, Object> props = new HashMap<>();
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "activity-stream-app");
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
      props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
      props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
      props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG, WallclockTimestampExtractor.class.getName());
      props.put(StreamsConfig.consumerPrefix("auto.offset.reset"), "earliest");
      return new KafkaStreamsConfiguration(props);
    }

    Let’s break this configuration down:

    • APPLICATION_ID_CONFIG: Unique identifier for your Kafka Streams application. It is used to group state stores and consumer groups.
    • BOOTSTRAP_SERVERS_CONFIG: Specifies the Kafka broker to connect to (in this case, localhost:9092).
    • DEFAULT_KEY_SERDE_CLASS_CONFIG: Tells Kafka how to (de)serialize the key of each record. Here, we use StringSerde for string keys.
    • DEFAULT_VALUE_SERDE_CLASS_CONFIG: Same as above, but for the value part of the message.
    • DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG: Defines how timestamps are extracted from messages.
    • auto.offset.reset (via consumerPrefix): Tells the stream to start reading from the earliest available offset if there is no committed offset yet. This ensures we don’t miss any messages on the first run.
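
    One detail worth noting: for Spring to pick up this bean and start the stream topology, the configuration class must also be annotated with @EnableKafkaStreams. A minimal sketch (the class name is illustrative):

    @Configuration
    @EnableKafkaStreams
    public class KafkaStreamsConfig {
      // The kStreamsConfigs() and kStream() beans shown above live in this class
    }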

    Summary

    In this blog post, we explored how to set up Kafka locally with Docker and integrate it into a Spring Boot application. We configured a topic with multiple partitions, built two consumers using the same group ID to demonstrate Kafka’s partition-based load balancing, and used message keys to control partition routing. Additionally, we introduced Kafka Streams to filter and forward specific events in real time.

  • Securing a Spring Boot REST API with OAuth 2.0 Bearer Tokens

    In this post, we’ll learn how to configure a Spring Boot application so that it uses OAuth 2.0 Bearer Tokens for authentication and authorization – powered by Spring’s Resource Server support. By the end, you’ll be able to protect any REST endpoint with JWT-based security and custom role mappings.

    To make the whole thing more practical, I have prepared a very simplified example application based on Spring Boot. So you can follow every step hands-on and test it directly yourself.

    Before we begin, let’s make sure you have everything in place.

    Prerequisites

    • A running OAuth 2.0 Authorization Server (e.g. Keycloak) that can issue access tokens.
    • A basic Spring Boot application with at least one REST controller. (You can use my sample one)
    • Maven or Gradle build configured for Spring Security and the Resource Server starter. (See here for an example)

    High-Level Flow

    1. User obtains an OAuth 2.0 access token from the Authorization Server.
    2. User calls your REST API, sending the access token in the Authorization: Bearer header.
    3. Spring Resource Server validates the token, authenticates the user, and establishes authorities.
    4. Your API returns the requested resource if the user is authorized.

    Now, let’s dive into each step in detail.

    1. Requesting an OAuth 2.0 Access Token

    In this example we’ll use Keycloak with the Resource Owner Password Credentials flow (for demo purposes only—use a more secure flow in production).

    Token Request
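
    Such a request goes against Keycloak’s token endpoint and can be issued with curl, for example. The following is only a sketch: host, username and password are placeholders, the openid scope is requested so that an ID token is returned, and depending on your client configuration a client_secret may also be required:

    curl -X POST "https://<keycloak-host>/realms/TestLocoVote/protocol/openid-connect/token" \
      -H "Content-Type: application/x-www-form-urlencoded" \
      -d "grant_type=password" \
      -d "client_id=loco-vote-test" \
      -d "scope=openid" \
      -d "username=<username>" \
      -d "password=<password>"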

    Sample Token Response

    {
      "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
      "expires_in": 300,
      "refresh_token": "eyJhbGciOiJIUzI1NiJ9...",
      "token_type": "Bearer",
      "id_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...",
      "scope": "openid email profile"
    }

    With the raw token in hand, we can inspect its contents to understand what claims it carries.

    Decoding the JWT payload reveals:

    {
      "iss": "https://.../realms/TestLocoVote",
      "sub": "...",
      "resource_access": {
        "loco-vote-test": {
          "roles": ["creator", "admin", "user"]
        }
      },
      ...,
      "scope": "openid email profile",
      "preferred_username": "giuseppe.clinaz",
      "email": "giuseppe...@...com"
    }

    Key claims

    • iss (Issuer URI) tells us who issued this token. This will be relevant in the next section.
    • resource_access lists the roles granted for a specific client (in our case, loco-vote-test). The token indicates that the user has the creator, admin and user roles for this resource. The logic used here is sufficient for our application; depending on your requirements and authorization server, you may use different claims.

    2. Configuring Spring Boot as an OAuth 2.0 Resource Server

    Before sending any API requests with your access token, you must configure your Spring Boot application as a Resource Server. Let’s start by adding the necessary properties.

    2.1 Application Properties

    Add the following to your application.yml (Have a look here for a list of configuration options):

    spring:
      security:
        oauth2:
          resourceserver:
            jwt:
              issuer-uri:   ${ISSUER_URI:https://.../realms/TestLocoVote}
              jwk-set-uri:  ${JWK_SET_URI:https://.../realms/TestLocoVote/protocol/openid-connect/certs}
      jwt:
        auth:
          converter:
            resource-id: loco-vote-test

    As stated in the documentation, the issuer-uri can be either an OpenID Connect discovery endpoint or an OAuth 2.0 Authorization Server Metadata endpoint as defined by RFC 8414. To validate the JWT’s signature, Spring uses the issuer-uri to perform the OpenID Connect discovery call. In this example, the URL would be https://.../realms/TestLocoVote/.well-known/openid-configuration. This endpoint returns a JSON document containing, among other things, the jwk-set-uri. Because my jwk-set-uri endpoint is located behind a proxy, I set the URI manually.
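
    Shortened, such a discovery response looks roughly like this (only the fields relevant here are shown; note that the standard field name for the key set is jwks_uri):

    {
      "issuer": "https://.../realms/TestLocoVote",
      "jwks_uri": "https://.../realms/TestLocoVote/protocol/openid-connect/certs",
      ...
    }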

    2.2 Security Filter Chain

    Next, define a SecurityConfig to enforce JWT authentication:

    @Configuration
    @EnableMethodSecurity
    public class SecurityConfig {
    
      private final KeycloakAuthoritiesConverter converter;
    
      public SecurityConfig(KeycloakAuthoritiesConverter converter) {
        this.converter = converter;
      }
    
      @Bean
      public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(authorizeRequests -> authorizeRequests.anyRequest().authenticated())
            .oauth2ResourceServer(oauth2 -> oauth2.jwt(
                jwt -> jwt.jwtAuthenticationConverter(converter))
            )
            .sessionManagement(
                session -> session.sessionCreationPolicy(SessionCreationPolicy.STATELESS)
            );
    
        return http.build();
      }
    }

    This configuration requires every request to be authenticated with a valid JWT.
    Via the oauth2ResourceServer, we plug in a custom converter (KeycloakAuthoritiesConverter) that controls how roles map to Spring authorities. Finally, the SessionCreationPolicy.STATELESS setting tells Spring not to create or store any HTTP session.

    3. Converting JWT Claims into Spring Authorities

    This KeycloakAuthoritiesConverter is the heart of our custom role mapping logic:

    @Component
    public class KeycloakAuthoritiesConverter
        implements Converter<Jwt, JwtAuthenticationToken> {
    
      private final String resourceId;
    
      public KeycloakAuthoritiesConverter(
          @Value("${spring.jwt.auth.converter.resource-id}") String resourceId) {
        this.resourceId = resourceId;
      }
    
      @Override
      public JwtAuthenticationToken convert(Jwt jwt) {
        Collection<GrantedAuthority> authorities = extractAuthorities(jwt);
        String principalName = jwt.getIssuer() + "+" + jwt.getSubject();
        return new JwtAuthenticationToken(jwt, authorities, principalName);
      }
    
      @SuppressWarnings("unchecked")
      private Collection<GrantedAuthority> extractAuthorities(Jwt jwt) {
        Map<String, Object> resourceAccess = jwt.getClaimAsMap("resource_access");
        if (resourceAccess == null) return Collections.emptySet();
    
        Object resource = resourceAccess.get(resourceId);
        if (!(resource instanceof Map)) return Collections.emptySet();
    
        Map<String, Object> resourceMap = (Map<String, Object>) resource;
        Object rolesObj = resourceMap.get("roles");
        if (!(rolesObj instanceof Collection)) return Collections.emptySet();
    
        return ((Collection<String>) rolesObj).stream()
          .map(String::toUpperCase)
          .map(role -> "ROLE_" + role)
          .map(SimpleGrantedAuthority::new)
          .collect(Collectors.toSet());
      }
    }

    With this converter, any roles found under resource_access -> <resource-id> -> roles are mapped to Spring GrantedAuthority instances, ready for method-level security checks. Again, this behavior depends on your requirements and can be customized as needed.

    4. Testing Your Secure API

    Let’s verify that our setup works as expected. Consider this simple controller:

    @RestController
    public class HelloWorldController {
    
      @GetMapping("/hello")
      public String hello() {
        return "Hello World";
      }
    }

    4.1 Unauthorized Request

    curl -i http://localhost:9090/hello
    HTTP/1.1 401 Unauthorized
    WWW-Authenticate: Bearer

    In the response, the WWW-Authenticate: Bearer header indicates that the API expects a Bearer token in the Authorization header. For a detailed understanding of all filter layers in the SecurityFilterChain, the documentation can be found here.

    4.2 Authorized Request

    1. Obtain an access token from Keycloak.
    2. Call the API:
    curl -H "Authorization: Bearer <access_token>" http://localhost:9090/hello
    Hello World

    Success! Your API now correctly validates tokens and returns the resource only when authorized.

    5. Restricting Endpoints by Role

    Fine-grained access control is available right out of the box. For example, you can restrict an endpoint so that only users with the admin role can invoke it. By injecting a JwtAuthenticationToken into your controller or service, you gain direct access to all token claims (username, roles, etc.) within your business logic.

    @RestController
    public class HelloWorldController {
    
      @PreAuthorize("hasAuthority('ROLE_ADMIN')")
      @GetMapping("/hello")
      public String hello(JwtAuthenticationToken token) {
        return "Hello World Admin: " + token.getName();
      }
    }

    Conclusion

    By turning your Spring Boot app into an OAuth 2.0 Resource Server, you offload authentication and token management to a dedicated Authorization Server (like Keycloak) and keep your REST endpoints stateless and secure. With just a handful of properties and a custom JwtAuthenticationConverter, you can map JWT claims to Spring Security authorities and enforce role-based access.

  • Docker Compose Integration in Spring Boot

    In this post, I’ll introduce you to Docker Compose support for Spring Boot applications. It allows you to run your repository layer (for example, a database) locally using a simple Docker Compose file.

    To make the whole thing more practical, I have prepared a very simplified example application. So you can follow every step hands-on and test it directly yourself.

    Project Setup Using Spring Initializr

    To quickly create an initial project, https://start.spring.io/ is a great tool. The Spring Initializr lets you search for dependencies and provides a basic project structure, which can then be downloaded as a ZIP file.

    Required Dependencies

    For our example project, we need the following dependencies:

    • Docker Compose support
    • PostgreSQL driver

    I’m using PostgreSQL for this project, so I need the corresponding driver. If you choose another database, you’ll need to select the appropriate driver accordingly.

    Clicking the “Explore” button shows the generated project structure.

    The compose.yaml File

    In the root directory, you’ll find the compose.yaml file. It already contains the configuration needed to launch, for example, the PostgreSQL database. Spring Initializr automatically detected the need for this configuration based on the selected driver and set it up accordingly. Of course, you can customize it — for example, change the image tag, environment variables, or port settings.
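
    For PostgreSQL, the generated compose.yaml looks roughly like the following (the exact image tag, credentials, and additional entries generated by Spring Initializr may differ):

    services:
      postgres:
        image: 'postgres:latest'
        environment:
          - 'POSTGRES_DB=mydatabase'
          - 'POSTGRES_PASSWORD=secret'
          - 'POSTGRES_USER=myuser'
        ports:
          - '5432:5432'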

    Configuration via Application Properties

    If you want to change the name or location of the file, you can do this using the spring.docker.compose.file property. (You can find more properties in the official documentation.)
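
    For example, in application.yml (the path is purely illustrative):

    spring:
      docker:
        compose:
          file: ./compose/dev/docker-compose.yaml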

    Starting the Application

    Before we can start the Spring Boot application, we need to ensure that Docker is installed and running on the host system.

    Application Output and Container Status

    After starting the application, we can see from the logs that the Docker Compose file is being used:

    Using Docker Compose file C:\Users\Admin\IdeaProjects\example.compose\compose\dev\docker-compose.yaml

    We also see that the PostgreSQL container was successfully created, started, and marked as healthy:

    INFO  DockerCli   :  Container dev-postgres-1  Created  
    INFO  DockerCli   :  Container dev-postgres-1  Starting  
    INFO  DockerCli   :  Container dev-postgres-1  Started  
    INFO  DockerCli   :  Container dev-postgres-1  Waiting  
    INFO  DockerCli   :  Container dev-postgres-1  Healthy

    Verifying with Docker CLI

    Running “docker ps” confirms that the container is running and exposed on a specific port.

  • Introduction to Spring Boot Application Testing for Beginners – A Practical Guide (Part 3: Repository-Layer)

    Welcome to the third and final part of the blog series. Although Part 1 and Part 2 are not essential for understanding this article, they provide a solid foundation and practical examples for testing the web and service layers.

    To make the whole thing more practical, I have prepared a very simplified example application based on Spring Boot. So you can follow every step of the series hands-on and test it directly yourself.

    As explained in the previous articles, we use slice tests specifically to test isolated layers of our application. They do not replace integration tests, but complement them in a meaningful way by expanding our test coverage in a targeted and efficient manner.

    Focus of this section: Testing the repository layer

    The repository class forms the interface between the Spring Boot application and the underlying database – and is therefore an essential component of the business logic. For this reason, this layer should also be secured by tests.

    A central example in this article is the findAllByTitle method. It returns a paginated list of all posts that:

    • contain the given search term in the title (case insensitive)
    • were created within a defined LocalDateTime window

    The method uses the @Query annotation to define its own JPQL query. Alternatively, we could let Spring Data JPA derive queries from method names – see the official documentation. Since we return a Page, a countQuery must also be specified. It counts the total number of hits for the pagination – otherwise Spring could not calculate the total number of pages correctly.

    Note: The JPA repository is completely sufficient for simple filter and search operations. For more complex queries or full-text searches, the use of external tools such as OpenSearch would be my choice.

    It should also be noted that JpaRepository is not used directly here, but rather BaseJpaRepository from the Hypersistence Utils library. This repository interface was developed to avoid the so-called repository anti-pattern. You can find out more about this in Vlad Mihalcea’s blog post and his GitHub repository.

    @Repository
    public interface PostRepository extends BaseJpaRepository<Post, UUID>,
        ListPagingAndSortingRepository<Post, UUID> {
    
      @Query(value = """
          SELECT p
          FROM Post p
          WHERE (LOWER(p.title) LIKE LOWER(CONCAT('%', :title, '%')))
          AND (p.createdAt >= :from)
          AND (p.createdAt <= :to)
          ORDER BY p.createdAt DESC
          """,
          countQuery = """
              SELECT COUNT(p)
              FROM Post p
              WHERE (LOWER(p.title) LIKE LOWER(CONCAT('%', :title, '%')))
              AND (p.createdAt >= :from)
              AND (p.createdAt <= :to)
              """)
      Page<Post> findAllByTitle(
          @Param("title") String title,
          @Param("from") LocalDateTime from,
          @Param("to") LocalDateTime to,
          Pageable pageable);
    
    // ... other Code ...
    }

    Repository tests: What to look out for?

    So how do we test this method in a meaningful and practical way?

    Here are a few basic recommendations:

    1. Do not test directly against production environments – not even against QA or Dev (in the first run).
    2. Tests should be executable locally – ideally even independent of the network connection or infrastructure.
    3. Minimise dependencies – the tests should be easily reproducible on every developer computer.

    Our solution of choice is Testcontainers. Testcontainers offers a way to run tests against real, containerised databases. Instead of accessing external database instances, we start a Docker container with a defined database image for each test run. The big advantage: the database is always consistent and independent of the host system. This means the test environment always remains the same – reproducible, stable and quickly ready for use.

    Structure of the test class

    Before we create our test class, we need a small preparatory measure in the test directory. We need to define our own start class (TestMain) so that Testcontainers works correctly in combination with Spring Boot:

    public class TestMain {
      public static void main(String[] args) {
        SpringApplication
            .from(Main::main)
            .with(TestcontainersConfiguration.class)
            .run(args);
      }
    }
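
    The referenced TestcontainersConfiguration is the configuration class that Spring Initializr generates for Testcontainers support. A sketch of what it typically contains (the container definition can alternatively live directly in the test class, as shown later):

    @TestConfiguration(proxyBeanMethods = false)
    public class TestcontainersConfiguration {

      @Bean
      @ServiceConnection
      PostgreSQLContainer<?> postgresContainer() {
        // Started automatically and wired into the application context
        return new PostgreSQLContainer<>("postgres:16");
      }
    }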

    Annotations of the test class

    In the following, I will explain the most important annotations that our test class requires:

    • @DataJpaTest: This annotation defines the slice test for database access. It does not load the entire Spring Context, but only the components relevant for JPA.
    • @Testcontainers: Activates the use of test containers in the test. Docker containers can thus be started automatically when the test is started.
    • @AutoConfigureTestDatabase(replace = Replace.NONE): @DataJpaTest implicitly includes an auto-configuration for in-memory databases. However, since we use an external database via Testcontainers, this annotation prevents Spring from replacing our data source with an embedded one – otherwise the test would fail with an error.
    • @EnableJpaRepositories: This annotation is only necessary if – as in my case – you are not using the standard JpaRepository but, for example, the BaseJpaRepository from Hypersistence Utils. The repository configuration is not loaded automatically in slice tests, so it must be specified manually.
    • @TestInstance(TestInstance.Lifecycle.PER_CLASS): Enables the use of non-static methods with @BeforeAll, as already explained in the second part of the series.
    @DataJpaTest
    @Testcontainers
    @AutoConfigureTestDatabase(replace = Replace.NONE)
    @EnableJpaRepositories(
        value = "net.fungiloid*",
        repositoryBaseClass = io.hypersistence.utils.spring.repository.BaseJpaRepositoryImpl.class
    )
    @TestInstance(TestInstance.Lifecycle.PER_CLASS)
    class PostRepositoryTest {
      @Autowired
      PostRepository postRepository;
    
      @Autowired
      UserRepository userRepository;
      ...
    }

    Since @DataJpaTest loads part of the Spring context, we can simply inject our repositories via @Autowired. Spring recognises these and provides them automatically in the test context.

    Database container with Testcontainers

      @Container
      @ServiceConnection
      static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16");
    • @Container: Identifies the resource as a test container that is automatically started at the beginning of the test and shut down again at the end.
    • @ServiceConnection: This annotation ensures the automatic connection of the Spring Boot Application Context with the database in the container.

    Optionally, connection properties can also be set manually using the @DynamicPropertySource annotation. An example can be found in the official Testcontainers documentation. (Equally helpful: the modules overview contains many practical examples for various technologies.)
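
    Such a manual wiring could look like this (a sketch based on the container defined above):

    @DynamicPropertySource
    static void configureProperties(DynamicPropertyRegistry registry) {
      // Point the data source at the dynamically mapped container port
      registry.add("spring.datasource.url", postgres::getJdbcUrl);
      registry.add("spring.datasource.username", postgres::getUsername);
      registry.add("spring.datasource.password", postgres::getPassword);
    }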

    Test the connection

    @DataJpaTest
    @Testcontainers
    @AutoConfigureTestDatabase(replace = Replace.NONE)
    @EnableJpaRepositories(
        value = "net.fungiloid*",
        repositoryBaseClass = io.hypersistence.utils.spring.repository.BaseJpaRepositoryImpl.class
    )
    @TestInstance(TestInstance.Lifecycle.PER_CLASS)
    class PostRepositoryTest {
    
      @Autowired
      PostRepository postRepository;
    
      @Autowired
      UserRepository userRepository;
    
      @Container
      @ServiceConnection
      static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16");
    
      @Test
      void connectionEstablished() {
        assertThat(postgres.isCreated()).isTrue();
      }
    }

    Setup: Preparation of the test data

    Before we turn our attention to the actual test cases, let’s take a look at the setup – the preparatory steps that ensure that each test runs under consistent conditions.

    1. Creation of a dummy user

    As the Post model expects a user as author, we create a dummy user once before all tests start. This is done in the method annotated with @BeforeAll:

    @BeforeAll
    void setup() {
      user = userRepository.persist(
          new User()
              .setDisplayName("test-user")
              .setKey("test-key")
              .setEmail("test.hall@gmx.de")
              .setFirstName("firstname")
              .setLastName("lastname"));
    }

    This user serves as the creator of all posts that are used in the tests.

    2. Cleaning up and creating new test data before each test

    To ensure that all tests run independently of each other and always start with the same initial data, we delete existing posts at the start of each test and create new ones:

    @BeforeEach
    void clearPosts() {
        postRepository.deleteAllByIdInBatch(postIds);
        postIds = populateDateRangePosts(LocalDateTime.now());
        entityManager.flush();
    }

    Aside: Dealing with the createdAt timestamp

    The createdAt field in the post entity stores the creation date of a post and is provided with the annotation @CreationTimestamp:

    public class Post implements Taggable, Categorizable {
      // ... more code
      
      @CreationTimestamp
      @Column(name = "created_at", nullable = false, updatable = false)
      private LocalDateTime createdAt;
      
      // ... more code
      
    }

    This annotation ensures that the current timestamp is automatically set when a new Post object is persisted. However, this is a hindrance for our tests, as we want to check specifically whether the repository method filters correctly by time period. To do this, we need defined, controllable values for createdAt.

    To tackle this problem, we set the timestamp directly in the database using a native query – even though the field is marked as unchangeable (updatable = false). This way, we ensure that every test works with the same timestamped data.

    private List<UUID> populateDateRangePosts(LocalDateTime now) {
        Post post1 = new Post();
        post1.setTitle("First Post");
        post1.setCreator(user);
        post1 = postRepository.persist(post1);
        updateCreatedAt(post1, now.minusDays(3));
        
        Post post2 = new Post();
        post2.setTitle("Second Post");
        post2.setCreator(user);
        post2 = postRepository.persist(post2);
        updateCreatedAt(post2, now.minusDays(2));
        
        Post post3 = new Post();
        post3.setTitle("Third Post");
        post3.setCreator(user);
        post3 = postRepository.persist(post3);
        updateCreatedAt(post3, now.minusDays(1));
        
        return List.of(post1.getId(), post2.getId(), post3.getId());
    }
    
    private void updateCreatedAt(Post post, LocalDateTime newCreatedAt) {
        entityManager.createNativeQuery("UPDATE post SET created_at = ?1 WHERE id = ?2")
            .setParameter(1, newCreatedAt)
            .setParameter(2, post.getId())
            .executeUpdate();
        entityManager.flush();
        entityManager.clear();
    }

    The test class therefore looks like this:

    @DataJpaTest
    @Testcontainers
    @AutoConfigureTestDatabase(replace = Replace.NONE)
    @EnableJpaRepositories(
        value = "net.fungiloid*",
        repositoryBaseClass = io.hypersistence.utils.spring.repository.BaseJpaRepositoryImpl.class
    )
    @TestInstance(TestInstance.Lifecycle.PER_CLASS)
    public class PostRepositoryTest {
      @Autowired
      PostRepository postRepository;
      @Autowired
      UserRepository userRepository;
      @Container
      @ServiceConnection
      static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16");
    
      @Autowired
      EntityManager entityManager;
    
      User user;
      List<UUID> postIds = List.of();
    
      @BeforeAll
      void setup() {
        user = userRepository.persist(
            new User()
                .setDisplayName("test-user")
                .setKey("test-key")
                .setEmail("test.hall@gmx.de")
                .setFirstName("firstname")
                .setLastName("lastname"));
      }
    
      @BeforeEach
      void clearPosts() {
        postRepository.deleteAllByIdInBatch(postIds);
        postIds = populateDateRangePosts(LocalDateTime.now());
        entityManager.flush();
      }
    
      @Test
      void connectionEstablished() {
        assertThat(postgres.isCreated()).isTrue();
      }  
    
      private List<UUID> populateDateRangePosts(LocalDateTime now) { ... }
    
      private void updateCreatedAt(Post post, LocalDateTime newCreatedAt) { ... }
    }

    Test cases

    Once we have completed the configuration for slice tests and Testcontainers, we can now test the repository logic in a targeted way – specifically the findAllByTitle method, which filters by title and also takes a time period into account.

    Goal of the test

    The repository method should:

    • return posts whose titles contain a specific string (case insensitive),
    • only consider results in the specified time period,
    • paginate the results and sort them by createdAt in descending order.

    Test case: Filtering by title

    This test checks whether the repository method filters correctly for a specific title – regardless of capitalisation. As only one post has exactly this title, we expect exactly one result. (getEffectiveDateRange(null, null) returns an array of LocalDateTime values ranging from 1960 to now.)

    @Test
    @DisplayName("Filter posts by title: Only matching posts are returned")
    void shouldReturnPost_WhenTitleMatchesExactly() {
        Pageable pageable = PageRequest.of(0, 10);
        Page<Post> result = postRepository.findAllByTitle(
            "First Post",
            getEffectiveDateRange(null, null)[0],
            getEffectiveDateRange(null, null)[1],
            pageable);
        
        assertThat(result.getTotalElements()).isEqualTo(1);
        Post found = result.getContent().get(0);
        assertThat(found.getTitle()).containsIgnoringCase("First Post");
    }

    Test case: Combination of title & time filter with sorting

    Here we test:

    • Whether only posts in the defined time window (2 days back to today) are taken into account.
    • Whether the sorting according to createdAt DESC works correctly.

    Two hits are expected (“Third Post” and “Second Post”), with “Third Post” being the most recent.

    @Test
    @DisplayName("Filter posts by title and date range: Correct posts are returned in sorted order")
    void shouldReturnPostsInCorrectOrder_WhenFilteringByTitleAndDateRange() {
        LocalDateTime now = LocalDateTime.now();
        Pageable pageable = PageRequest.of(0, 10);
        LocalDateTime from = now.minusDays(2).toLocalDate().atStartOfDay();
        Page<Post> result = postRepository.findAllByTitle("Post", from, now, pageable);
        
        assertThat(result.getTotalElements()).isEqualTo(2);
        List<Post> posts = result.getContent();
        assertThat(posts.get(0).getCreatedAt()).isAfter(posts.get(1).getCreatedAt());
        assertThat(posts.get(0).getTitle()).isEqualTo("Third Post");
        assertThat(posts.get(1).getTitle()).isEqualTo("Second Post");
    }

    Test case: No hit due to unsuitable time period

    In this case, the specified time period is outside the creation time of all existing posts.

    @Test
    @DisplayName("No posts are found due to non-matching date range")
    void shouldReturnNoPosts_WhenDateRangeDoesNotMatchAnyPost() {
        LocalDateTime now = LocalDateTime.now();
        Pageable pageable = PageRequest.of(0, 10);
        LocalDateTime from = now.minusDays(5).toLocalDate().atStartOfDay();
        LocalDateTime to = now.minusDays(4).toLocalDate().atStartOfDay();
        Page<Post> result = postRepository.findAllByTitle("Post", from, to, pageable);
        
        assertThat(result.getTotalElements()).isEqualTo(0);
    }

    Test case: No hit due to non-matching title

    In this case, the search term does not match the title of any existing post.

    @Test
    @DisplayName("No posts are found due to invalid title filter")
    void shouldReturnNoPosts_WhenTitleDoesNotMatchAnyPost() {
        LocalDateTime now = LocalDateTime.now();
        Pageable pageable = PageRequest.of(0, 10);
        LocalDateTime from = now.minusDays(2).toLocalDate().atStartOfDay();
        Page<Post> result = postRepository.findAllByTitle("NoValidPostTitle", from, now, pageable);
        
        assertThat(result.getTotalElements()).isEqualTo(0);
    }

    Conclusion

    These tests demonstrate how slice tests and Testcontainers can be used to validate the repository layer in a targeted way – with a high level of control over the test data, complete isolation and a reproducible environment.

    The advantages:

    • Clear delimitation of the tested layer
    • No overhead due to the complete Spring Context
    • Realistic database environment thanks to Testcontainers

  • Introduction to Spring Boot Application Testing for Beginners – A Practical Guide (Part 2: Service Layer)

    This is the second part of the three-part blog series. The first part is not essential for understanding this post, but is highly recommended as it provides a better overall understanding, practical examples of web layer testing and a brief overview of the application architecture.

    To make the whole thing more practical, I have prepared a very simplified example application based on Spring Boot. So you can follow every step of the series hands-on and test it directly yourself.

    As discussed there, we focus on slice tests to specifically test individual layers of our architecture. These do not replace integration tests – rather, they extend our test coverage in a targeted and efficient way.


    Focus of this section: Service layer test

    The service layer forms the centrepiece of an application’s business logic. To test it specifically, let’s take a look at the update() method, which can be used to update an existing post.

    The update() method has three parameters:

    • UUID id – identifies the post that is to be updated
    • UpdatePostDTO – holds the new data for the post
    • JwtAuthenticationToken – represents the currently authenticated user

    Functionality of the update() method:

    • The author of the post is determined using the ID of the post
    • It is checked whether the current user is either the author himself or an admin
    • The post is overwritten with the new data if the authorisation check was successful
    // PostService.java

    // ... other imports ...

    @Transactional
    @Service
    class PostService {

      // ... other code ...

      @Autowired
      UserService userService;
      @Autowired
      PostRepository postRepository;
      @Autowired
      Utils utils;
        
        public Either<ErrorJson, Post> update(
                UUID id,
                UpdatePostDTO updatePostDTO,
                JwtAuthenticationToken jwtAuthenticationToken) {
            return utils.wrapCall(
                            () -> postRepository.findCreatorByPostId(id).orElseThrow(),
                            new ErrorNotFoundInDB("post", updatePostDTO.getId())
                    )
                    .flatMap(creator -> userService.getAuthorizedCreator(
                            jwtAuthenticationToken, creator.getId()
                    ))
                    .flatMap(user -> utils.wrapCall(
                            () -> postRepository.update(updateDtoToModel(
                                    id,
                                    updatePostDTO,
                                    user)),
                            new ErrorUnableToSaveToDB("post")
                    ));
        }
    
    // ... other Code ...
    }

    Structure of the test class

    Before we dive into the individual test cases, let’s clarify the basic concept of our test class.

    We follow an architecture-oriented slice test approach in which each application layer is tested separately. In this section, we focus on isolated unit tests of the service layer. Integration tests will follow in a later part.

    In contrast to the web layer test from part 1, here we test completely independently of Spring Boot-specific components. Why is this possible? Because the service layer generally does not require any in-depth Spring abstractions. Although we use @Autowired, we can easily mock the dependencies with Mockito.

    Mockito integration with @ExtendWith

    @ExtendWith(MockitoExtension.class)
    class PostServiceTest {
      ...
    }

    This allows us to add the @Mock annotation to fields (instead of mocking manually), and @InjectMocks ensures that Mockito automatically injects the dependencies. Note that the exact setup may vary slightly depending on your Spring Boot and Mockito versions.

    @ExtendWith(MockitoExtension.class)
    class PostServiceTest {
        @Mock
        private UserService userService;
        @Mock
        private Utils utils;
        @InjectMocks
        private PostService postService;
    }

    Preparation of the test with lifecycle annotations

    In our tests, we need a JwtAuthenticationToken, among other things. As this is not part of the logic to be tested, we can simply mock it. We use @BeforeAll in combination with mock() for this. (Note that the token could also be mocked using @Mock; the lifecycle method is used here to demonstrate additional functionality.)

    @TestInstance(TestInstance.Lifecycle.PER_CLASS)
    @ExtendWith(MockitoExtension.class)
    class PostServiceTest {
        @Mock
        private UserService userService;
        @Mock
        private Utils utils;
        @InjectMocks
        private PostService postService;
        
        private JwtAuthenticationToken token;
        
        @BeforeAll
        void setup() {
            token = mock(JwtAuthenticationToken.class);
        }
    }

    Normally, the method annotated with @BeforeAll should be static. However, thanks to the annotation @TestInstance(TestInstance.Lifecycle.PER_CLASS), it can also be non-static.

    Aside: JUnit 5 lifecycle annotations at a glance

    • @BeforeEach – called before each individual test (once per test)
    • @AfterEach – called after each individual test (once per test)
    • @BeforeAll – called once before all tests in the class (1x)
    • @AfterAll – called once after all tests in the class (1x)

    Test cases

    Test case: Successful update of a post

    In this test case, we check whether the update() method works correctly in the positive case: the post exists, the user is authorised, and the update is successfully carried out in the database.

    @TestInstance(TestInstance.Lifecycle.PER_CLASS)
    @ExtendWith(MockitoExtension.class)
    class PostServiceTest {
        @Mock
        private UserService userService;
        @Mock
        private Utils utils;
        @InjectMocks
        private PostService postService;
        
        private JwtAuthenticationToken token;
        
        @BeforeAll
        void setup() {
            token = mock(JwtAuthenticationToken.class);
        }
        
        @Test
        @DisplayName("Should update post successfully")
        void shouldUpdatePost_whenUserIsAuthorized_andPostExists() {
            when(utils.wrapCall(any(CheckedFunction0.class), any(ErrorJson.class)))
                    .thenReturn(Either.right(mockUser))
                    .thenReturn(Either.right(mockPost));
    
            when(userService.getAuthorizedCreator(token, mockUser.getId()))
                    .thenReturn(Either.right(mockUser));
    
            Either<ErrorJson, Post> result = postService.update(postId, dto, token);
    
            verify(userService).getAuthorizedCreator(token, mockUser.getId());
            verify(utils, times(2))
                    .wrapCall(any(CheckedFunction0.class), any(ErrorJson.class));
    
            assertThat(result.isRight()).isTrue();
            assertThat(result.get().getId()).isEqualTo(postId);
        }
    }

    Analyse the update() method

    The method to be tested is located in the PostService.

        public Either<ErrorJson, Post> update(
                UUID id,
                UpdatePostDTO updatePostDTO,
                JwtAuthenticationToken jwtAuthenticationToken) {
            return utils.wrapCall(
                            () -> postRepository.findCreatorByPostId(id).orElseThrow(),
                            new ErrorNotFoundInDB("post", updatePostDTO.getId())
                    )
                    .flatMap(creator -> userService.getAuthorizedCreator(
                            jwtAuthenticationToken, creator.getId()
                    ))
                    .flatMap(user -> utils.wrapCall(
                            () -> postRepository.update(updateDtoToModel(
                                    id,
                                    updatePostDTO,
                                    user)),
                            new ErrorUnableToSaveToDB("post")
                    ));
        }

    Step 1: Mock findCreatorByPostId

    utils.wrapCall(
        () -> postRepository.findCreatorByPostId(id).orElseThrow(),
        new ErrorNotFoundInDB("post", updatePostDTO.getId())
    )

    In the test, this call is mocked as follows.

    when(utils.wrapCall(any(CheckedFunction0.class), any(ErrorJson.class)))
            .thenReturn(Either.right(mockUser))

    This means that the first time wrapCall is called, a mockUser is returned – the author of the post.

    Step 2: Authorisation check

    The method checks whether the user is authorised to make changes:

    .flatMap(creator -> userService.getAuthorizedCreator(
                jwtAuthenticationToken,
                creator.getId()
            ))

    The matching mock definition.

    when(userService.getAuthorizedCreator(token, mockUser.getId()))
            .thenReturn(Either.right(mockUser));

    The user is authorised – we are simulating the successful case here.

    Step 3: Update in the database

    The actual saving of the updated post looks like this.

    .flatMap(user -> utils.wrapCall(
            () -> postRepository.update(updateDtoToModel(id, updatePostDTO, user)),
            new ErrorUnableToSaveToDB("post")
    ));

    In the test, this is also mapped using wrapCall. As wrapCall was already used for the first step, we specify two returns in succession:

    when(utils.wrapCall(any(CheckedFunction0.class), any(ErrorJson.class)))
            .thenReturn(Either.right(mockUser))
            .thenReturn(Either.right(mockPost));

    Step 4: Assertions and verifications

    At the end, we check whether our expectations have been met – both in terms of content and methodological execution:

    verify(userService).getAuthorizedCreator(token, mockUser.getId());
    verify(utils, times(2))
        .wrapCall(any(CheckedFunction0.class), any(ErrorJson.class));
    
    assertThat(result.isRight()).isTrue();
    assertThat(result.get().getId()).isEqualTo(postId);

    The assertions ensure that:

    • The correct path has been traversed (mocks have been called)
    • The result is a success: Either.right(Post)
    • The returned post ID matches the expected one

    Other test cases: negative scenarios

    In addition to successfully testing the happy path, it is also important to cover error scenarios. These negative test cases ensure that our application behaves correctly even if something goes wrong or is not permitted.

    We will test three typical problem cases below:

    Test case: Post does not exist

    In this test, we simulate the case where the post for the specified ID is not found in the database. In this case, the wrapCall() method returns an Either.left with a suitable ErrorJson.

    We expect that:

    • No access to userService takes place, as the process is cancelled beforehand
    • The error is correctly returned to the caller
    @Test
    @DisplayName("Should return error if post does not exist")
    public void shouldReturnError_whenPostDoesNotExist() {
        ErrorJson notFound = new ErrorJson("Not Found", "Post not found", 404);
    
        when(utils.wrapCall(any(CheckedFunction0.class), any(ErrorJson.class)))
                .thenReturn(Either.left(notFound));
    
        Either<ErrorJson, Post> result = postService.update(postId, dto, token);
    
        verify(utils)
                .wrapCall(any(CheckedFunction0.class), any(ErrorJson.class));
        verifyNoInteractions(userService);
    
        assertThat(result.isLeft()).isTrue();
        assertThat(result.getLeft().getStatus()).isEqualTo(404);
    }

    Test case: User is not authorised

    This checks what happens if the current user is not authorised to edit the post. In this case too, the getAuthorizedCreator() method returns an Either.left with an error object.

    We make sure that:

    • The authorisation step is executed
    • But no attempt is made to save the post
    • A corresponding 403 error is returned
    @Test
    @DisplayName("Should return error if user not authorized")
    public void shouldReturnError_whenUserIsNotAuthorized() {
        when(utils.wrapCall(any(CheckedFunction0.class), any(ErrorJson.class)))
                .thenReturn(Either.right(mockUser));
    
        ErrorJson unauthorized = new ErrorJson("Access Denied", "You do not have permission", 403);
    
        when(userService.getAuthorizedCreator(token, mockUser.getId()))
                .thenReturn(Either.left(unauthorized));
    
        Either<ErrorJson, Post> result = postService.update(postId, dto, token);
    
        verify(utils).wrapCall(any(CheckedFunction0.class), any(ErrorJson.class));
        verify(userService).getAuthorizedCreator(token, mockUser.getId());
    
        assertThat(result.isLeft()).isTrue();
        assertThat(result.getLeft().getStatus()).isEqualTo(403);
    }

    Test case: Database error when saving

    Even if all the previous steps are successful, saving to the database can still fail. This test simulates exactly this case: wrapCall() during the save process returns an Either.left with a DB-specific error.

    We check that:

    • All steps are completed
    • The error from the last wrapCall() is correctly propagated to the caller
    @Test
    @DisplayName("Should return error if DB update fails")
    public void shouldReturnError_whenUpdateFailsDueToDatabaseError() {
        ErrorJson dbError = new ErrorJson("DB Error", "Could not save post", 500);
    
        when(utils.wrapCall(any(CheckedFunction0.class), any(ErrorJson.class)))
                .thenReturn(Either.right(mockUser))
                .thenReturn(Either.left(dbError));
    
        when(userService.getAuthorizedCreator(token, mockUser.getId()))
                .thenReturn(Either.right(mockUser));
    
        Either<ErrorJson, Post> result = postService.update(postId, dto, token);
    
        verify(utils, times(2))
                .wrapCall(any(CheckedFunction0.class), any(ErrorJson.class));
        verify(userService).getAuthorizedCreator(token, mockUser.getId());
    
        assertThat(result.isLeft()).isTrue();
        assertThat(result.getLeft().getStatus()).isEqualTo(500);
    }

    Conclusion

    In this section, we have shown how the service layer of a multi-tier Spring application can be tested. In doing so, we deliberately avoided Spring-specific features. The result: lean, fast and easy-to-read unit tests.

    By using Mockito, we were able to easily mock external dependencies and test the service logic in isolation. AssertJ was used to verify the results.

    In the next part of the series, we will learn how to use Testcontainers to test the repository layer of our Spring Boot application against a real database.
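
    To give a first impression, here is a minimal, hypothetical sketch of such a repository test backed by a real PostgreSQL container. The class and repository names, the image tag and the test body are assumptions; the next part will cover the details properly.

    // Hypothetical preview of a Testcontainers-based repository test
    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
    import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
    import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
    import org.testcontainers.containers.PostgreSQLContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;

    @Testcontainers
    @DataJpaTest
    @AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
    class PostRepositoryTest {

        // One PostgreSQL container, started once for all tests in this class
        @Container
        @ServiceConnection // Spring Boot 3.1+: wires the datasource to the container
        static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16");

        @Autowired
        private PostRepository postRepository;

        @Test
        void savesAndLoadsPost() {
            // ... exercise the repository against the real database ...
        }
    }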

  • Introduction to Spring Boot Application Testing for Beginners – A Practical Guide (Part 1: Web-Layer)

    This is the first part of a three-part series that provides a practical introduction to testing Spring Boot applications. A social media app I developed serves as an example. The focus is on the post service, which implements the central business logic around posts.

    To make the whole thing more practical, I have prepared a very simplified example application based on Spring Boot. So you can follow every step of the series hands-on and test it directly yourself.

    What can the Post Service do?

    • Creators can create, edit, delete and retrieve posts
    • Users may only read posts

    Architecture of the application

    The application follows a classic layered architecture:

    • RestController – defines the HTTP endpoints
    • Service – contains the business logic
    • Repository – communicates with the database

    System overview

    The overall system consists of three main components:

    • API Gateway
    • Keycloak + Identity Provider (IDP)
    • REST-API (Social Media App)

    Focus of this section: Web layer tests

    This section centres on testing the web layer, in particular the RestController. The following code snippet shows an excerpt from the PostController and the PostService:

    // PostController.java
    
    // ... other Imports and Code ...
    
    @Autowired
    private PostService postService;
    
    @Autowired
    private Utils utils;
    
    @PreAuthorize("hasAnyAuthority('ROLE_CREATOR', 'ROLE_ADMIN')")
    @PostMapping(
        produces = MediaType.APPLICATION_JSON_VALUE,
        consumes = MediaType.APPLICATION_JSON_VALUE
    )
    public ResponseEntity<?> create(
            JwtAuthenticationToken jwtAuthenticationToken,
            @RequestBody PostDTO postDTO
    ) {
        return utils.fold201(
            postService.save(postDTO, jwtAuthenticationToken)
                .map(post -> postService.modelToSingleResponseDTO(post))
        );
    }
    
    // ... other Code ...
    
    // PostService.java
    
    // ... other Imports and Code ...
    
    @Transactional
    public Either<ErrorJson, Post> save(PostDTO postDTO, JwtAuthenticationToken jwtAuthenticationToken) {
        return userService.getUserFromToken(jwtAuthenticationToken)
                .flatMap(user -> 
                    utils.wrapCall(
                        () -> postRepository.persist(DTOToModel(postDTO, user)),
                        new ErrorUnableToSaveToDB("post")
                    )
                );
    }
    
    // ... other Code ...


    Structure of the test class

    Before we look at the test cases in detail, we will first define the basic concept of our test class.

    We follow a layered test approach that is orientated towards the architecture of the application. In this section, we focus on unit tests of the controller. Integration tests will follow later.

    Especially in more complex applications, it is often impractical to test controllers in complete isolation from their dependencies. However, since the focus here is clearly on the behaviour of the controller, it is pragmatically justified to still treat these tests as unit tests.

    Test context and annotations

    Spring loads an ApplicationContext for each test run, which can be time-consuming. It therefore makes sense to:

    • Use as few different test contexts as possible → Spring can then serve tests from its context cache
    • Keep each context as small as possible → faster start-up, faster tests

    Since we only want to test the web layer at this point (not the entire application), we use a slice test with:

    @WebMvcTest(controllers = PostController.class)
    public class PostControllerTest {}

    Why not @SpringBootTest?

    @SpringBootTest
    @AutoConfigureMockMvc
    public class PostControllerTest {}

    This combination loads the entire ApplicationContext, including database, services, repositories etc. – which is unnecessary and inefficient if only a single controller is to be tested. @WebMvcTest is much leaner and more targeted here.

    Regardless of whether we use @SpringBootTest or @WebMvcTest, no real web server is started during the test runs. The exception is the explicit configuration @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT). In this case, a web server is actually started, which can be helpful in certain debugging scenarios. It is not necessary for our purposes, but for illustration, such a test could look like the sketch below.
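
    A minimal, hypothetical sketch of a full-server test: the TestRestTemplate talks to the actually running server on the random port. The /posts endpoint is taken from the controller shown earlier; expecting 401 for an unauthenticated request assumes our security setup is active.

    // Hypothetical sketch: testing against a real embedded web server
    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.boot.test.web.client.TestRestTemplate;
    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;

    import static org.assertj.core.api.Assertions.assertThat;

    @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
    class PostControllerServerTest {

        @Autowired
        private TestRestTemplate restTemplate; // pre-configured against the random port

        @Test
        void unauthenticatedRequestIsRejectedOverRealHttp() {
            ResponseEntity<String> response = restTemplate.getForEntity("/posts", String.class);
            // Without a JWT, the security layer should reject the call
            assertThat(response.getStatusCode()).isEqualTo(HttpStatus.UNAUTHORIZED);
        }
    }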

    Although no web server is running, we need a tool to send HTTP requests to our controller. This is where MockMvc, or from Spring Boot 3.4 the MockMvcTester, comes into play. These tools make it possible to simulate HTTP requests as if they were arriving over the network. The responses can then be checked using Hamcrest (classic) or AssertJ (from Spring Boot 3.4).

    Test of the web layer

    The tests focus on the external interface of our application – the web layer, specifically the PostController. But why is it even worth testing this layer separately? In many cases, there is a risk that security rules, routing or simple annotations such as @PreAuthorize, @RequestBody, @PostMapping etc. are implicitly assumed to work. In reality, however, it is often the case that:

    • Access rights are inadvertently set incorrectly (ROLE_USER instead of ROLE_CREATOR)
    • Security concepts such as CSRF or JWT verification are misconfigured
    • Controllers react too tolerantly or too restrictively to requests

    With web layer tests like this one, we have the opportunity to recognise these problems early on in the development process. And because the test does not load the entire application but focuses solely on the controller (@WebMvcTest), it is lightweight and very fast.

    We can therefore summarise – these tests are valuable in the context of:

    • Security & authorisation → Do we make sure that only certain roles really have access?
    • Request structure & validation → How does the endpoint behave if fields are missing, the format is incorrect or the content type is invalid?
    • Error tolerance & response handling → Is an error handled correctly (e.g. 403 Forbidden, 400 Bad Request, 415 Unsupported Media Type)?

    These tests are no substitute for integration tests – but they provide a high level of security with little effort.

    @WebMvcTest(controllers = PostController.class)
    @Import(SecurityConfig.class)
    class PostControllerTest {
        @Autowired
        private MockMvcTester mockMvc;
    
        @MockitoBean
        private PostService postService;
    
        @TestConfiguration
        static class TestConfig {
            @Bean
            public Utils utils() {
                return new Utils();
            }
        }
        // ... other Code ...
    }

    If we look at the controller shown at the beginning, we recognise two dependencies: the PostService and the Utils class.

    Since we are not testing the service itself, but only the controller, we need to mock the PostService. This is done using the annotation @MockitoBean, whereby Spring automatically registers a mock object and injects it into the controller.

    We could also have mocked the Utils class in this way. In this case, however, I decided to bring the real implementation into the test context. To do this, we use a @TestConfiguration in which the Utils instance is defined as a bean. This configuration is picked up when the test starts and the bean is registered automatically. As the Utils class is @Autowired in the controller, it is injected correctly. The sketch below illustrates why the real implementation matters here.
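
    The Utils folding logic is what turns the service's Either into the HTTP status codes our assertions check, so mocking it away would hollow out the tests. The article does not show the implementation, but fold201 could plausibly look something like this sketch (the method body and the ErrorJson accessor are assumptions):

    // Hypothetical sketch of Utils.fold201; the real implementation may differ
    public <T> ResponseEntity<?> fold201(Either<ErrorJson, T> result) {
        return result.fold(
                error -> ResponseEntity.status(error.getStatus()).body(error), // left: pass the error through
                body -> ResponseEntity.status(HttpStatus.CREATED).body(body)   // right: 201 Created
        );
    }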

    Another important point: in the controller, we secure the endpoints using @PreAuthorize. For this access control to work correctly in the test environment, our own SecurityConfig must be explicitly imported into the slice test. Otherwise, @WebMvcTest falls back to a default security auto-configuration instead of our own rules, which can lead to unexpected behaviour – such as failed authorisations despite a correct setup. A minimal sketch of such a configuration follows the flow overview below.

    Brief overview: Authentication and authorisation flow

    • The user first authenticates via Keycloak
    • After successful login, the user receives a bearer token (JWT)
    • Our SecurityConfig defines how the JWT is processed: We extract the user role from the resource_access.roles claim
    • Depending on this role (e.g. ROLE_ADMIN, ROLE_CREATOR, ROLE_USER), Spring Security decides whether a specific controller endpoint may be called
    • The resulting JwtAuthenticationToken is automatically available as a parameter in the controller and contains all relevant information about the user
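
    To make this flow tangible, here is a minimal, hypothetical sketch of such a SecurityConfig. The claim path follows the description above; details such as null handling, role casing or client-level nesting inside the Keycloak token are assumptions.

    // Hypothetical sketch of the SecurityConfig; the real configuration may differ
    import java.util.Collection;
    import java.util.List;
    import java.util.Map;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.config.annotation.method.configuration.EnableMethodSecurity;
    import org.springframework.security.config.annotation.web.builders.HttpSecurity;
    import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
    import org.springframework.security.core.GrantedAuthority;
    import org.springframework.security.core.authority.SimpleGrantedAuthority;
    import org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationConverter;
    import org.springframework.security.web.SecurityFilterChain;

    @Configuration
    @EnableWebSecurity
    @EnableMethodSecurity // activates @PreAuthorize checks
    public class SecurityConfig {

        @Bean
        public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
            http
                .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
                .oauth2ResourceServer(oauth2 -> oauth2
                    .jwt(jwt -> jwt.jwtAuthenticationConverter(jwtAuthenticationConverter())));
            return http.build();
        }

        // Maps roles from the resource_access.roles claim to Spring authorities
        private JwtAuthenticationConverter jwtAuthenticationConverter() {
            JwtAuthenticationConverter converter = new JwtAuthenticationConverter();
            converter.setJwtGrantedAuthoritiesConverter(token -> {
                Map<String, Object> resourceAccess = token.getClaimAsMap("resource_access");
                @SuppressWarnings("unchecked")
                Collection<String> roles = resourceAccess == null
                        ? List.of()
                        : (Collection<String>) resourceAccess.getOrDefault("roles", List.of());
                return roles.stream()
                        .map(role -> (GrantedAuthority) new SimpleGrantedAuthority("ROLE_" + role))
                        .toList();
            });
            return converter;
        }
    }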

    Test cases

    Test case: Successful creation of a post

    Let’s now look at the first test case. This tests the successful creation of a post via the corresponding POST endpoint.

    @WebMvcTest(controllers = PostController.class)
    @Import(SecurityConfig.class)
    class PostControllerTest {
    
        @Autowired
        private MockMvcTester mockMvc;
    
        @MockitoBean
        private PostService postService;
    
        @TestConfiguration
        static class TestConfig {
            @Bean
            public Utils utils() {
                return new Utils();
            }
        }
    
        @Test
        @DisplayName("Successfully: POST with ROLE_CREATOR")
        void createPost_successfulAsCreator() throws Exception {
            UUID userId = UUID.randomUUID();
            UUID postId = UUID.randomUUID();
            Post mockPost = createMockPost(postId, createMockUser(userId));
    
            when(postService.save(
                    ArgumentMatchers.any(PostDTO.class),
                    ArgumentMatchers.any(JwtAuthenticationToken.class))
            ).thenReturn(Either.right(mockPost));
    
            when(postService.modelToSingleResponseDTO(
                    ArgumentMatchers.any(Post.class))
            ).thenAnswer(invocation -> {
                Post post = invocation.getArgument(0);
                return createResourceResponse(post);
            });
            
            mockMvc.post()
                    .uri("/posts")
                    .content(buildPostRequest().toString())
                    .contentType(MediaType.APPLICATION_JSON)
                    .with(csrf())
                    .with(jwt().authorities(new SimpleGrantedAuthority("ROLE_CREATOR")))
                    .exchange()
                    .assertThat()
                    .hasStatus(HttpStatus.CREATED)
                    .hasContentType(MediaType.APPLICATION_JSON)
                    .bodyJson()
                    .hasPathSatisfying("$.data.item.title", path ->
                            path.assertThat().isEqualTo("Mock Title"))
                    .hasPathSatisfying("$.data.item.description", path ->
                            path.assertThat().isEqualTo("Description"));
        }
    
        // ... other tests or helper methods...
    }

    When the endpoint is called, the controller first invokes the save method of the PostService. If successful, this returns a Post object. The modelToSingleResponseDTO method of the service is then called to transform the returned Post object into a response DTO.

    Since we mocked the PostService in the test, these methods would return null by default – or even throw a NullPointerException if their return values are reused. To avoid this, we need to explicitly tell Mockito what should happen when these methods are called.

    1. postService.save(…)
    Here we define that a call with any arguments (any(PostDTO.class) and any(JwtAuthenticationToken.class)) returns a prepared mockPost:

       when(postService.save(...)).thenReturn(Either.right(mockPost));

    2. postService.modelToSingleResponseDTO(…)
    This method is then called with the result of the save call (i.e. the mockPost). We use thenAnswer(…) instead of thenReturn(…), as we want to dynamically read the post passed to the mock service in order to generate a DTO from it.

    when(postService.modelToSingleResponseDTO(any(Post.class)))
      .thenAnswer(invocation -> {
          Post post = invocation.getArgument(0);
          return createResourceResponse(post);
      });

    This ensures that the controller runs through its logic correctly and that a fully constructed response object is generated.

    csrf()
    As Spring Security activates CSRF protection by default, we must explicitly include a CSRF token in the test for state-changing HTTP requests such as POST, PUT or DELETE. If this is omitted, Spring rejects the request with a 403 Forbidden – even if the authentication is correct. The .with(csrf()) method ensures that a valid CSRF token is simulated and sent in the test.

    jwt()
    As our endpoints are secured with @PreAuthorize, Spring Security expects an authenticated context. We use .with(jwt()) to simulate a valid JWT-based login. In addition, we can use .authorities(…) to assign specific roles in order to test various access scenarios such as ROLE_USER, ROLE_CREATOR or ROLE_ADMIN.

    assertThat()
    The assertThat() method returns an MvcTestResultAssert object, which we can then use to formulate our assertions. The MockMvcTester API offers a more modern and much more readable syntax than the classic MockMvc, which relies on many static methods and a chained .andExpect(…) structure.

    // example: classic MockMvc with Hamcrest
    
    mockMvc.perform(post("/posts")
            .with(csrf())
            .with(jwt().authorities(new SimpleGrantedAuthority("ROLE_CREATOR")))
            .content(buildPostRequest().toString())
            .contentType(MediaType.APPLICATION_JSON))
        .andExpect(status().isCreated())
        .andExpect(content().contentType(MediaType.APPLICATION_JSON))
        .andExpect(jsonPath("$.data.item.title").value("Mock Title"))
        .andExpect(jsonPath("$.data.item.description").value("Description"));
    
    // example: MockMvcTester with AssertJ
    mockMvc.post()
           .uri("/posts")
           .content(buildPostRequest().toString())
           .contentType(MediaType.APPLICATION_JSON)
           .with(csrf())
           .with(jwt().authorities(new SimpleGrantedAuthority("ROLE_CREATOR")))
           .exchange()
           .assertThat()
           .hasStatus(HttpStatus.CREATED)
           .hasContentType(MediaType.APPLICATION_JSON)
           .bodyJson()
           .hasPathSatisfying(
                "$.data.item.title",
                 path -> path.assertThat().isEqualTo("Mock Title")
            )
           .hasPathSatisfying(
                "$.data.item.description",
                path -> path.assertThat().isEqualTo("Description")
            );
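
    For completeness: the helper methods used in these tests (buildPostRequest(), createMockUser(), createMockPost(), createResourceResponse()) are elided in this article. Purely as an illustration, the first three could look something like the following sketch; the field names, entity setters and the use of org.json are assumptions. createResourceResponse() would wrap the post into the $.data.item structure expected by the assertions.

    // Hypothetical sketch of the elided test helpers; details are assumptions
    private JSONObject buildPostRequest() throws JSONException {
        return new JSONObject()
                .put("title", "Mock Title")
                .put("description", "Description");
    }

    private User createMockUser(UUID userId) {
        User user = new User();
        user.setId(userId);
        return user;
    }

    private Post createMockPost(UUID postId, User user) {
        Post post = new Post();
        post.setId(postId);
        post.setTitle("Mock Title");
        post.setDescription("Description");
        post.setUser(user);
        return post;
    }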

    Further test cases: incorrect and unauthorised access

    In addition to the successful happy path test, it is also essential to cover error scenarios. These negative test cases ensure that our application behaves correctly even if something goes wrong or is not permitted. They make a significant contribution to safeguarding the API – especially in the areas of security and validation.

    We will test three typical problem cases below:

    Access with insufficient role (403 Forbidden)

    @Test
    @DisplayName("FAILURE: ROLE_USER is not allowed to create a post")
    void createPost_forbiddenForUserRole() throws Exception {
        mockMvc.post()
                .uri("/posts")
                .content(buildPostRequest().toString())
                .contentType(MediaType.APPLICATION_JSON)
                .with(csrf())
                .with(jwt().authorities(
                    new SimpleGrantedAuthority("ROLE_USER")
                 ))
                .exchange()
                .assertThat()
                .hasStatus(HttpStatus.FORBIDDEN);
    }

    In this test, a user with the ROLE_USER role is simulated. As only ROLE_CREATOR and ROLE_ADMIN may access the endpoint (see @PreAuthorize), the request is correctly rejected with a 403 Forbidden.

    No token available (401 Unauthorised)

    @Test
    @DisplayName("No Token → 401 Unauthorized")
    void createPost_unauthorizedWithoutToken() throws Exception {
        mockMvc.post()
                .uri("/posts")
                .content(buildPostRequest().toString())
                .with(csrf())
                .contentType(MediaType.APPLICATION_JSON)
                .exchange()
                .assertThat()
                .hasStatus(HttpStatus.UNAUTHORIZED);
    }

    A request is sent here without authentication (no JWT). However, our Spring Security configuration grants access only to authenticated users. Therefore, as expected, we receive the HTTP status 401 Unauthorised.

    Incorrect content type (415 Unsupported Media Type)

    @Test
    @DisplayName("Wrong Content-Type → 415 Unsupported Media Type")
    void createPost_unsupportedMediaType() throws Exception {
        mockMvc.post()
                .uri("/posts")
                .content("test=1234")
                .contentType(MediaType.TEXT_PLAIN)
                .with(csrf())
                .with(jwt().authorities(
                    new SimpleGrantedAuthority("ROLE_CREATOR")
                 ))
                .exchange()
                .assertThat()
                .hasStatus(HttpStatus.UNSUPPORTED_MEDIA_TYPE);
    }

    In this case, a text/plain request is sent to a JSON endpoint. As the controller explicitly expects application/json (via @PostMapping(…, consumes = MediaType.APPLICATION_JSON_VALUE)), Spring rejects the request with 415 Unsupported Media Type.

    Conclusion

    In this part of the series, we showed how the web layer of a Spring Boot application can be tested in isolation using so-called slice tests. Through the targeted use of @WebMvcTest, we were able to focus exclusively on the controller level – without loading the complete ApplicationContext or other layers.

    In the next part of the series, we will look at the service layer and learn how to test the business logic. In the final third part, we will then take a look at integration tests for the repository layer, in which we simulate a real database environment using Testcontainers.
