Helidon 4.0 with Java 21 and Virtual threads: Exploring the new features
Unlocking the future of Java and fast Microservices with Nima, the new Helidon Web Server
In October 2023, Helidon 4.0 was released with a brand-new web server implementation, known under the project name Nima, that leverages the power of Java 21 virtual threads.
In this article I (re)visit Helidon, exploring what is new and how the framework performs. I’ll compare both Helidon flavours (SE and MP) against Spring Boot, and I’ll explore the use of virtual threads compared to traditional platform threads.
The helidon-compare repository
To help you get started with your own testing, I updated Dmitry Aleksandrov’s original Compare repo to Helidon 4.0, and I added a few improvements to play around with virtual threads so you can easily compare the performance for chosen workloads.
Feel free to visit and download the repo here.
Getting started with Helidon 4.0
But before we start comparing frameworks, let’s see how to get a basic “Hello World” application up and running.
First you need to make sure you have Java 21 and Maven 3.8 installed. See the Helidon 4.0 prerequisites document for more information. In this article I won’t be using the container features, so I’m skipping the installation of Docker and Kubernetes.
On the other hand, I will be using GraalVM to compile native images, so installing it is another required step to prepare for the tests. See the GraalVM guide for more info on the topic.
So now you’re ready to get started with Helidon 4.0!
There are 2 ways to get started with the MicroProfile (MP) style:
- The MP QuickStart Guide: this guide has you run a Maven archetype command that immediately generates an example project, offering a set of basic features to get you started without any coding.
- The MP Tutorial: this guide takes you step by step through the various files that constitute a Helidon project, starting with the pom file, adding the different Java classes one by one, and giving you deeper insight into the build-up of the project.
Each of these 2 paths leads you to a server application that listens on port 8080 and that you can call, for example through curl, to get your “Hello World” message in return.
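To give an idea of the MP programming model, below is a minimal sketch of the kind of JAX-RS resource class these guides lead to (class and method names are illustrative, not the exact generated code):

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

// Illustrative MP-style resource: Helidon MP discovers JAX-RS resources through CDI.
@ApplicationScoped
@Path("/greet")
public class GreetResource {

    // GET /greet returns the plain-text greeting used throughout this article.
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String greet() {
        return "Hello World!";
    }
}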
Below you see a representation of the stack used by the SE and the MP framework, as well as the use of the Helidon Web Server and virtual threads.
Of course, these tutorials will only show you the tip of the iceberg in terms of available features, so make sure to browse the rest of the documentation to get a better understanding of the richness of the framework.
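For contrast with the MP example above, here is a rough sketch of how the same greeting endpoint can be written in the SE style, where routes are wired directly on the Helidon WebServer without CDI or JAX-RS (based on the Helidon 4 SE API; the generated quickstart code is more elaborate and builder details may differ slightly):

import io.helidon.webserver.WebServer;

public class Main {
    public static void main(String[] args) {
        // Build and start the Helidon WebServer (backed by virtual threads),
        // routing GET /greet to a simple handler.
        WebServer.builder()
                .port(8080)
                .routing(routing -> routing
                        .get("/greet", (req, res) -> res.send("Hello World!")))
                .build()
                .start();
    }
}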
Comparing the different frameworks
In this experiment I want to compare the start-up time of the server application, depending on the framework used.
In the repo you will find 3 projects that offer the minimal “Hello world” functionality in the different technologies:
- Helidon MP
- Helidon SE
- Spring Boot
You can build and run the 3 frameworks in the classic way, or alternatively use GraalVM to compile a native executable based on the Helidon MP project, resulting in a total of 4 options.
To build and run the frameworks, position your shell in each of the 3 projects and execute the commands below, adjusting the jar name to match the project:
mvn package
java -jar target/helidon-quickstart-se.jar
For each project, you can now validate the server is indeed listening on port 8080 by issuing the below curl command:
curl -X GET http://localhost:8080/greet
This will produce the “Hello World” return message.
To build and run the GraalVM Native image, you can issue the below commands to launch the compilation of the executable for your platform, then run the application:
mvn package -Pnative-image
./target/helidon-quickstart-mp
Once all 4 flavours are producing the desired “Hello World” message, make sure to stop the applications to free port 8080.
You can now use the Python script called measure.py to measure the time it takes to launch the application and get the first response. You specify which framework to use with the -f flag:
python measure.py -f HelidonMP
python measure.py -f HelidonSE
python measure.py -f SpringBoot
python measure.py -f GraalMP
This will result in 10 runs of starting the application, sending a request, and measuring the required time. Doing this for all 4 options and calculating the average per framework gives the following result:
You can see that Helidon MP is about 33% faster than Spring Boot, and Helidon SE starts in about half the time of Helidon MP … but the big winner is the GraalVM Native Image, with a start-up time about 15 times faster than the standard Helidon MP!
Looking at Virtual Threads
One of the big changes in Java 21 is the introduction of virtual threads. They let you keep using the familiar concept of threads in your application without tying up an OS thread per task: the JVM schedules the virtual threads onto a small pool of carrier (OS) threads, so you can write blocking calls in your threads without consuming critical system resources.
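As a quick illustration of the Java 21 API itself (independent of Helidon), the snippet below starts a virtual thread directly and then through a virtual-thread-per-task executor; the printed messages are just for demonstration:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        // Start a single virtual thread; the blocking sleep does not
        // hold on to an OS thread while it waits.
        Thread vt = Thread.ofVirtual().start(() -> {
            try {
                Thread.sleep(200);
                System.out.println("Hello from " + Thread.currentThread());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        vt.join();

        // Or submit tasks to an executor that creates one virtual thread per task.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            executor.submit(() -> System.out.println("Task on " + Thread.currentThread()));
        }
    }
}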
To illustrate the effect of virtual threads, I included a 4th project in the repository, called threadsSE, which lets you easily observe the effect of virtual threading on the parallel processing power of an application.
If you build and run this application, you can still call the standard /greet endpoint on port 8080 to get your Hello World, but by adding an extra path parameter you can invoke a loop that spawns the specified number of threads. Each thread executes an arithmetic operation followed by a sleep. You can run this with either normal or virtual threads and compare the results:
curl -X GET http://localhost:8080/greet/XYYYY-ZZZZ
- X can be “V” for virtual threads, or “N” for normal threads,
- YYYY is the number of threads to run in parallel,
- ZZZZ is the length of the sleep after each calculation, in milliseconds (0 means no sleep).
So to run 3000 virtual threads in parallel with a sleep of 200 ms:
curl -X GET http://localhost:8080/greet/V3000-200
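Under the hood the idea is straightforward; the sketch below shows roughly what such a loop can look like (an illustration under my own assumptions, not the actual threadsSE code):

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

public class ThreadLoopSketch {

    // Spawn 'count' threads (virtual or platform), each doing a small
    // calculation followed by a sleep, and wait for all of them to finish.
    static long run(boolean virtual, int count, long sleepMillis) throws InterruptedException {
        Thread.Builder builder = virtual ? Thread.ofVirtual() : Thread.ofPlatform();
        List<Thread> threads = new ArrayList<>(count);
        long start = System.nanoTime();
        for (int i = 0; i < count; i++) {
            threads.add(builder.start(() -> {
                long sum = 0;
                for (int j = 0; j < 10_000; j++) {
                    sum += j; // the arithmetic part
                }
                try {
                    Thread.sleep(sleepMillis); // the blocking part
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }));
        }
        for (Thread t : threads) {
            t.join();
        }
        return Duration.ofNanos(System.nanoTime() - start).toMillis();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Virtual:  " + run(true, 3000, 200) + " ms");
        System.out.println("Platform: " + run(false, 3000, 200) + " ms");
    }
}

With normal threads, each of the 3000 sleeps occupies a full OS thread; with virtual threads, the carrier threads are released during the sleep, which is exactly the difference the threadsSE endpoint lets you observe.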
By playing with the parameters you can see the behaviour of the 2 types of threads:
- Normal threads take much more time once the number of parallel threads goes up,
- When increasing the wait timer to more than a few seconds, normal threads will hit the limit on the number of threads available in your system, and an exception is thrown on the server.
Below you see a chart where the number of threads goes from 10 to 100,000 with a sleep of 200 milliseconds: for 100,000 executions, virtual threads are 18 times faster (1.3 seconds compared to 24.8 seconds):
Generating multiple sessions on the server process
In the above tests, the parallelism was created by spawning threads inside the server process. Another way to put stress on the server process is to use the wrk tool. This allows you to generate load on the server by specifying the desired number of parallel clients:
./wrk -c 100 -t 20 -d 10s http://localhost:8080/greet/V10-200
In the above example, I’m launching 20 client threads, opening in total 100 connections to the server, using the URL provided — in this case asking for 10 virtual threads and a delay of 200 ms.
Again you can play with the parameters, but be aware of your machine’s limitations in terms of available connections; this limit has nothing to do with the thread mechanism itself.
Conclusions
I had a lot of fun and got a much better understanding of the different frameworks and the power of virtual threads by developing the examples described in this article!
Feel free to simply repeat my experiments, or complement them with relevant pieces of code from your own applications to test the impact of the various implementation options on your developments!