JHipster – Making things a little less hip

Just like a good old Belgian beer can make for a nice change of pace after you’ve filled up on all those crafty IPAs and Stouts, it’s not always necessary to go for the latest and greatest. Last post saw us using Kafka as a message broker. In this blog post we’ll put a more traditional broker in between our thirsty beer clients and our brewery pumping out the happy juice! This blog is all about RabbitMQ! So let’s end this introduction and get started!
The final version of the code can be found here. Instead of building the whole thing from scratch like we did in the Kafka blog, we’ll be using a JHipster generator module this time.

JHipster Spring Cloud Stream generator

The JHipster Spring Cloud Stream generator can add RabbitMQ/Spring Cloud Stream support to our HelloBeer application. Like other JHipster modules, it’s implemented as a Yeoman generator.

Installation

Installing and running the generator is pretty straightforward. The steps are explained in the generator’s README.md:

  • First, install the generator:
yarn global add generator-jhipster-spring-cloud-stream
  • Next, run the generator (from the directory of our JHipster application) and accept the defaults:
yo jhipster-spring-cloud-stream
  • Finally, spin up the generated Docker Compose file to start the RabbitMQ message broker:
docker-compose -f src/main/docker/rabbitmq.yml up -d

Generated components

You can actually run the application now and see the queue in action. But before we do that let’s first take a look at what the generator did to our JHipster application:

  • application-dev.yml/application-prod.yml: modified to add RabbitMQ topic configuration;
  • pom.xml: modified to add the Spring Cloud Stream dependencies;
  • rabbitmq.yml: the docker file to spin up the RabbitMQ broker;
  • CloudMessagingConfiguration: configures a RabbitMQ ConnectionFactory;
  • JhiMessage: domain class to represent a message (with a title and a body) to be put on the RabbitMQ topic;
  • MessageResource: REST controller to POST a message onto the RabbitMQ topic and GET the list of posted messages;
  • MessageSink: service class that subscribes to the topic and puts each received message in a list (the list that gets read when issuing a GET via the MessageResource) – a sketch of what this might look like is shown below.

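To get a feel for the moving parts, here’s a rough sketch of what the generated MessageSink might look like. This is purely illustrative – it assumes the service listens on Spring Cloud Stream’s default Sink channel (the generator uses the default channels) and keeps the messages in memory; the actual generated code may differ in the details:

import java.util.ArrayList;
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.stereotype.Service;

// Illustrative sketch, not the generator's exact code: listen on the default
// input channel and keep every received message in an in-memory list.
@Service
@EnableBinding(Sink.class)
public class MessageSink {

  private final Logger log = LoggerFactory.getLogger(MessageSink.class);

  // the list that MessageResource reads when you issue a GET
  private final List<JhiMessage> jhiMessages = new ArrayList<>();

  @StreamListener(Sink.INPUT)
  public void receiveMessage(JhiMessage jhiMessage) {
    log.info("Received message: {}", jhiMessage);
    jhiMessages.add(jhiMessage);
  }

  public List<JhiMessage> getJhiMessages() {
    return jhiMessages;
  }
}
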
Running and testing

Alright, let’s test the RabbitMQ broker the generator set up for us. Run the JHipster application, log in as the admin user and go to the API page. You’ll see that a new message-resource REST service has been added to the list of services:

[Screenshot: the API page listing the new message-resource endpoints]

Call the POST operation a few times to post some messages to the RabbitMQ topic (which fills up the jhiMessages list):

[Screenshot: POSTing a message via the message-resource API]

Now, issue the GET operation to retrieve all the messages you POSTed in the previous step:

[Screenshot: the GET operation returning the list of posted messages]

Cool! Working as expected. Now let’s get to work and put another RabbitMQ topic in place to decouple our OrderService again, like we did with Kafka in our previous blog post.

Replacing Kafka with RabbitMQ

[Diagram: the RabbitMQ binder sitting between the Order REST service and the OrderService]

Now we’re gonna put another RabbitMQ topic in between the Order REST service and the OrderService, just like we did with Kafka in our previous blog post. Let’s leave the topic that the generator created in place. Since that guy is using the default channels, we’ll have to add some custom channels for our new topic that will handle the order processing.

First add a channel for publishing to a new RabbitMQ topic – we’ll be configuring the topic in a later step – and call it orderProducer:

public interface OrderProducerChannel {
  String CHANNEL = "orderProducer";

  @Output
  MessageChannel orderProducer();
}

We also need a channel for consuming orders for our topic. Let’s call that one orderConsumer:

public interface OrderConsumerChannel {
  String CHANNEL = "orderConsumer";

  @Input
  SubscribableChannel orderConsumer();
}

Now link those two channels to a new topic called topic-order in the application-dev.yml configuration file:

spring:
    cloud:
        stream:
            default:
                contentType: application/json
            bindings:
                input:
                    destination: topic-jhipster
                output:
                    destination: topic-jhipster
                orderConsumer:
                    destination: topic-order
                orderProducer:
                    destination: topic-order

The changes we need to make in the OrderResource controller are similar to the ones we made for the Kafka setup. The biggest difference is in the channel names, since the default channels are already taken by the generated example code.
Another difference is that we put the EnableBinding annotation directly on this class instead of on a Configuration class. This way the Spring DI framework can figure out that the injected MessageChannel should be the orderProducer channel. If you put EnableBinding on a Configuration class – like we did in our Kafka setup – you need to use Qualifiers or inject the interface – OrderProducerChannel – instead (a sketch of that alternative follows the controller code below); otherwise Spring won’t know which bean to inject, since there are now multiple MessageChannel beans.

@RestController
@RequestMapping("/api/order")
@EnableBinding(OrderProducerChannel.class)
public class OrderResource {

  private final Logger log = LoggerFactory.getLogger(OrderResource.class);
  private static final String ENTITY_NAME = "order";
  private MessageChannel orderProducer;

  public OrderResource (final MessageChannel orderProducer) {
    this.orderProducer = orderProducer;
  }

  @PostMapping("/process-order")
  @Timed
  public ResponseEntity<OrderDTO> processOrder(@Valid @RequestBody OrderDTO order) {
    log.debug("REST request to process Order : {}", order);
    if (order.getOrderId() == null) {
        throw new InvalidOrderException("Invalid order", ENTITY_NAME, "invalidorder");
    }
    orderProducer.send(MessageBuilder.withPayload(order).build());

    return ResponseEntity.ok(order);
  }
}

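For completeness, a minimal sketch of the alternative mentioned above – EnableBinding on a Configuration class and the binding interface injected into the controller – could look roughly like this (illustrative only, not the code used in this post; validation omitted for brevity):

// Sketch of the alternative: @EnableBinding(OrderProducerChannel.class) sits on a
// @Configuration class (as in the Kafka setup) instead of on the controller.
@Configuration
@EnableBinding(OrderProducerChannel.class)
public class CloudStreamConfiguration {
}

// The controller (in its own file) injects the binding interface itself, so there
// is no ambiguity about which of the MessageChannel beans is meant.
@RestController
@RequestMapping("/api/order")
public class OrderResource {

  private final OrderProducerChannel channels;

  public OrderResource(final OrderProducerChannel channels) {
    this.channels = channels;
  }

  @PostMapping("/process-order")
  @Timed
  public ResponseEntity<OrderDTO> processOrder(@Valid @RequestBody OrderDTO order) {
    // resolve the concrete output channel through the binding interface
    channels.orderProducer().send(MessageBuilder.withPayload(order).build());
    return ResponseEntity.ok(order);
  }
}
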
In our OrderService we also add the EnableBinding annotation, and again we use the StreamListener annotation to consume orders from the topic – but this time we point the listener at our custom orderConsumer channel:

@Service
@Transactional
@EnableBinding(OrderConsumerChannel.class)
public class OrderService {
  ....
  @StreamListener(OrderConsumerChannel.CHANNEL)
  public void registerOrder(OrderDTO order) throws InvalidOrderException {
    ....
  }
  ....
}

Building unit/integration tests for the RabbitMQ setup doesn’t differ much from what we did in the Kafka setup. Check my previous blog post for the examples.

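To give an idea, a minimal sketch of such an integration test using Spring Cloud Stream’s test binder (the spring-cloud-stream-test-support dependency) could look like this – the class name, test name and assertion style are my own, not code from the repository:

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.test.binder.MessageCollector;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.test.context.junit4.SpringRunner;

// Minimal sketch: the test binder captures messages on the bound channels
// instead of sending them to a real RabbitMQ broker.
@RunWith(SpringRunner.class)
@SpringBootTest
public class OrderProducerChannelIntTest {

  @Autowired
  private OrderProducerChannel channels;

  @Autowired
  private MessageCollector messageCollector;

  @Test
  public void orderIsPublishedToTheOrderProducerChannel() {
    OrderDTO order = new OrderDTO();

    channels.orderProducer().send(MessageBuilder.withPayload(order).build());

    // the captured message should be waiting in the collector's queue
    Message<?> received = messageCollector.forChannel(channels.orderProducer()).poll();
    assertThat(received).isNotNull();
  }
}
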
Testing the setup

Alright, let’s test our beast again. These are the stock levels before:

[Screenshot: Item Stock Levels before the order]

Now let’s call the OrderResource and place an order of 20 Small bottles of Dutch Pilsner:

[Screenshot: POSTing the order via the OrderResource API]

Check the stock levels again:

[Screenshot: Item Stock Levels after the order]

Notice the new item stock level line! The inventory item went down from 90 to 70. Our RabbitMQ setup is working! Cheers!

Summary

In this blog post we saw how easy it is to switch from Kafka to RabbitMQ. The Spring Cloud Stream code mostly abstracts away the differences and didn’t change much. We also used a generator this time to do most of the heavy lifting. Time for a little vacation in which I’m gonna think about my next blog post. More JHipster, a look at Spring Cloud Stream’s error handling possibilities, or should I switch to some posts about other Spring Cloud modules? Let’s drink a few HelloBeer™ crafts and ponder that!
