Devsoap


Blog entries:

New beta releases out for both Vaadin Flow and Vaadin 8

Beta versions of Vaadin Gradle plugins released to support Gradle 6.

Two new beta releases were published this weekend, one for Vaadin 10-14 and one for Vaadin 8. The newly released beta versions are:

Vaadin 8:
  DS Gradle Vaadin Plugin         2.0.0.beta2

Vaadin 10/14:
  DS Gradle Vaadin Flow Plugin    1.3.0.beta4

Preparing for Gradle 6

Gradle 6 will bring with it some breaking changes to the Gradle plugin API that will not be backward compatible across major versions. To start moving in that direction, the new beta versions now require Gradle 5.6 as the minimum Gradle version.

This means that if your project is using an older version of Gradle you will need to upgrade the version. If you are using the Gradle Wrapper (which you should) you need to update the wrapper version and re-run the wrapper task.
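If you use the wrapper, the upgrade could look like this (the exact version number is only an example; any 5.6+ release works):

```shell
# Regenerate the wrapper files pointing at a Gradle version >= 5.6
./gradlew wrapper --gradle-version 5.6.4

# Verify which version is now in use
./gradlew --version
```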

Getting to stable

The plugins will remain in beta until Gradle 6 is released. At that point, the plugins will require Gradle 6 to achieve the longest possible forward compatibility.

Gradle support for Vaadin 14 now available for PRO subscribers

Gradle Vaadin Flow plugin 1.3 is now available with support for Vaadin 14.

DS Gradle Vaadin Flow Plugin 1.3.beta1 with Javascript module support is now available!

For those of you who haven't followed the Vaadin Flow releases, the 14th release is actually a totally new framework with a new client side stack. The whole Vaadin client side stack was re-vamped to sit on top of Javascript rather than HTML templates, and the client package manager was changed.

This brought with it a lot of changes.

For the Gradle plugin this meant that the whole client side handling needed to be re-done to support the new way of handling the Polymer 3 Javascript modules. The plugin also needed to start supporting the internal details of how Vaadin distinguishes between compatibility mode and the new NPM mode.

A lot has also changed regarding the project structure and plugin usage, but many things are still the same. I probably will not be able to answer all the questions here, but I'll try to answer the most obvious ones below. If you have more, don't hesitate to ask them in the comments section below.

Let's start with the elephant in the room.

Why do you now charge money for the Vaadin 14 support?

With the constant large-scale changes Vaadin makes to the framework, it is no longer possible to maintain the plugin for free.

A big thank you to Vaadin, the main sponsor, and the other sponsors who have funded the project so far; they have made it possible to build this project for the community.

But I believe the only way Open Source can work sustainably in the long run is if everyone using the software pitches in. And that leads me to the new PRO subscriptions.

First off, to avoid confusion: the Devsoap PRO subscription is not tied or linked to the Vaadin PRO subscription in any way. It is solely a Devsoap service.

Moving forward the PRO subscriptions will allow everyone to only pitch in a little to make a difference. The more people join the effort, the more time I (and maybe others) will be able to pitch in and work on the plugin to bring you new features and maintain the plugin.

This also will make it easier for us to provide money bounties for the Vaadin community to make use of when maintaining the plugin. (Spoiler alert: There is already one bounty available, continue reading to learn more ;) )

How do I take the plugin into use?

If you are using Vaadin 10-14 (compatibility mode) you do not need to do anything; you can continue to use the plugin for free by just updating the plugin version to 1.3.beta2. Those Vaadin versions will remain free in the future as well.

If you want to take Vaadin 14 (non-compatibility mode) into use you will need a PRO subscription. You can get that from

Once you've got the subscription credentials you need to add the following to the build.gradle file:


devsoap {
  email = <the email address you registered with>
  key = <the API key you received via email>
}
Once that is set, the plugin will work in non-compatibility mode using NPM to resolve the dependencies.

As usual you can include the plugin by using the following in your gradle scripts:


plugins {
  id "com.devsoap.vaadin-flow" version "1.3.beta2"
}
For more information check out the Getting started guide.

Is there any documentation available?

Some of the documentation has already been ported to Vaadin 14 in Devsoap Docs. Most of the new articles are behind the [PRO] tag, so to view them you will again need your PRO credentials.

The documentation is an on-going effort and will improve as the plugin stabilizes further.

Is there a migration tool available?

No, not yet.

My suggestion is that if you have already been developing on Vaadin 10-13 for a while and have a large code base, just continue using Vaadin 14 in compatibility mode. There is currently nothing to gain by starting a migration to JS components right now.

For new projects I would suggest going with Vaadin 14. The plugin provides a handy task, vaadinCreateProject, that will create the necessary stubs for your new project so you get the classes and resources in the correct folders from the get-go.
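Assuming the plugin has been applied to the project, generating the stubs is a single task invocation:

```shell
# Creates the class and resource stubs for a new Vaadin project
./gradlew vaadinCreateProject
```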

You can read more about the project creation in this docs article.

What are the known limitations of the Beta release?

Currently the plugin has a few limitations you should be aware of:

That is it for this release. I hope you didn't despair while waiting for NPM support, and I do hope to see you on Github.

I believe there are exciting times ahead for the Vaadin Gradle community!

Become a PRO (for the price of a coffee)

Introducing a new PRO subscription to get the features among the first, prioritized bug tickets, access to PRO only documentation and more.

Writing software for the Open Source community and its developers is both fun and rewarding. You get to meet cool hombres and kick-ass chicks and at the same time do what we developers know best, code like a *****. This is why I've been doing this for so many years, and aim to do it for many more.

But at the same time, realities often set in and as with everything funding is needed for the most basic things. This is not only true for us coders, this is true for artists, musicians, politicians, lawyers and all other shady peeps. The difference is how transparent we are about it.

So, I've set up a PRO subscription which you can get for the price of one or two cappuccinos a month and with it you will get the following:

* I don't yet know how this would work as I am not gathering Github nicknames of PRO members. If you have any idea, let me know :)

Those are the initial things I came up with, if you can think of more just comment in the comment section below and I'll consider adding more features for the PRO members.

If you represent a school or other non-profit and think this is still too much but would like to use the products, send me an email explaining your situation. I believe we should all help each other out where we can.

You can buy a subscription via the Store. It uses PayPal as the payment provider, so you'll need a PayPal account.

That is all for now, stay tuned about upcoming PRO features this fall ;)

3 things to look out for when using Spring Framework

Learn to write Spring applications that will also be a joy to evolve and improve in the future.

I have for some years been involved in evolving Java Web applications that were written with early versions of the Spring Framework into more modern incarnations. When doing this I have constantly stumbled upon these issues that make the application hard to maintain and develop further.

1 Database schema is exposed via API

This is by far the most common cause of issues for the clients I work with.

The major problem is that the Spring Framework tutorials drive developers towards this pattern. It might look like a simple and easy solution, and it will look good in a presentation, but once the application matures it turns into a real headache, both from a security and a maintainability perspective.

A code smell of this is when you start asking questions like this:

There are multiple issues with exposing everything through the API layer.

From a security perspective the developer is no longer controlling what the application exposes via the API. All it takes is for a developer to make the mistake of adding a new column with some sensitive data to the database and voilà, the sensitive information is immediately exposed via the end-point. To circumvent this you would need API schema tests that thoroughly verify that no unknown fields are added to the returned JSON. Most applications I've seen so far lack even the most basic tests, not to mention these kinds of corner-cases.

Another issue, from a maintainability perspective, is that when the database structure is directly exposed via the end-point, you are also locking the API down to whatever the database happens to be that day.

As the application matures it is very likely you will at some point need a V2 of your API which returns a different JSON structure. This might occur when you add a new consumer of your service, like another micro-service or a mobile client. The consumer of the service might have totally different needs from what your database schema provides.

The way I've seen most developers solve this is to start adding extra transient fields to the existing entities returned by the repository. This of course affects V1 of the API, which starts receiving extra information it never expected. Likewise, the new clients start receiving extra information they do not need that was only meant for V1 of the API. The worst cases I've seen are where, after enough consumers and versions have been added, the entities need to be composed from almost every table in the database and the queries become slow and hard to understand.

Let's look at a better, proven solution for this!

If you do not want to run into the above problems you cannot take the short path the Spring tutorials show. Forget about RestRepository, consider it an abstraction meant for demo purposes only.

You will need to use a layered approach to separate the data from the data representation returned via the API if you want an easier time maintaining and building on the API in the future.

Instead, you could use an approach like this:

  1. Use a Repository for the data layer only! Use it to fetch data, nothing else. A good indication of this is that your Entity classes do not contain any annotations related to the JSON representation (like @JsonProperty) but only validation and query related annotations.
  2. Use a Service to mutate data and perform extra validations on it. A good indication this is done properly is that the service is the only class that accesses the repository. All data access goes through the service.
  3. Use a RestController to convert the entities returned by the service into the API JSON. Your RestController methods should only return DTO's (Data-Transfer-Objects) and never entities. To convert between the entities and the DTO's use ModelMapper's TypeMaps! A good indication of proper DTO usage is that the DTO only contains JSON annotations and no database related annotations. Do not try to be smart and extend your Entity classes into DTO's ;)
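As an illustration, the layering can be sketched in plain Java. The Customer names below are hypothetical, and a hand-written mapping stands in for the ModelMapper TypeMap recommended above so the sketch stays dependency-free:

```java
import java.util.List;
import java.util.stream.Collectors;

// Entity: persistence-layer class with database concerns only (hypothetical example)
class CustomerEntity {
    Long id;
    String name;
    String ssn; // sensitive column that must never leak through the API

    CustomerEntity(Long id, String name, String ssn) {
        this.id = id;
        this.name = name;
        this.ssn = ssn;
    }
}

// DTO: the JSON representation the API exposes; no database annotations here
class CustomerDto {
    Long id;
    String name;

    CustomerDto(Long id, String name) {
        this.id = id;
        this.name = name;
    }
}

// The single place where entity -> DTO conversion happens. In a real
// application this would be a ModelMapper TypeMap; a manual mapping is
// shown here only to keep the example self-contained.
class CustomerMapper {
    static CustomerDto toDto(CustomerEntity entity) {
        return new CustomerDto(entity.id, entity.name); // ssn deliberately not copied
    }

    static List<CustomerDto> toDtos(List<CustomerEntity> entities) {
        return entities.stream().map(CustomerMapper::toDto).collect(Collectors.toList());
    }
}
```

With this split, adding a V2 of the API is just a matter of adding a second DTO and mapping, without touching the entity or V1.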

Now, let's look at the problems we had before and see how they are solved with the above approach.

Now if a developer adds a new field to the database table it is exposed via the Entity class to the application, but the API remains unchanged. The developer has to manually add the field to the DTO if he wants to expose the information via the API as well. This forces the developer to think about what he is doing and whether it is a good idea or not.

Also the versioning problem goes away. Now that the API is using DTO's instead of Entity classes a new Controller or method can easily be added with a different DTO composed from the Entity. This means the application can provide different versions of the API without making changes to the data representation.

2 Data is only validated at database level or not at all

This is a common scenario I also see, where developers rely only on database constraints for data validation. Usually companies wake up to this only after there has been a successful XSS or SQL injection attack. At that point the data is already full of garbage and it will be really hard to make it consistent and useful again.

Another, milder version of this is that validation is only done at one level, either at the API or at the data level. The usual argument from developers is that it is wasteful to perform validation twice and that validating what comes in via the API should be enough.

However, I have seen it so many times that a simple coding mistake in the Controller or Service, or a security issue with the API validations has caused invalid data to end up in the database that could easily have been prevented.

If you have followed the solution in #1 and separated your API from your data layer, then solving this should be as easy as applying the same validators to your DTO's as well as your Entity classes, and always using @Valid for every input DTO in your Controller.

A good sign is that every field in both your DTO's and your Entity classes has some validator attached. This is especially true for String fields.
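As a dependency-free sketch of the idea (in a real Spring application this would be Bean Validation annotations such as @NotBlank shared by the DTO and the Entity, plus @Valid on every controller input; the Task names here are hypothetical):

```java
import java.util.function.Predicate;

// One validation rule, applied at BOTH the API layer (DTO) and the data layer (entity)
class Validators {
    static final Predicate<String> DESCRIPTION =
            s -> s != null && !s.isBlank() && s.length() <= 255;

    static void require(Predicate<String> rule, String value, String field) {
        if (!rule.test(value)) {
            throw new IllegalArgumentException("Invalid value for " + field);
        }
    }
}

// Validated when the request arrives at the API
class TaskDto {
    final String description;

    TaskDto(String description) {
        Validators.require(Validators.DESCRIPTION, description, "description");
        this.description = description;
    }
}

// Validated again before the data reaches the database, so a coding
// mistake in the controller or service cannot persist garbage
class TaskEntity {
    final String description;

    TaskEntity(String description) {
        Validators.require(Validators.DESCRIPTION, description, "description");
        this.description = description;
    }
}
```

Because the same rule object guards both layers, the two validations cannot drift apart over time.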

3 You don't need streams if you are not streaming data to the front-end or sending events

Most business applications today use REST as the main communication method both between services and between front-end and back-end. Unless you are working with a fully event driven architecture or you plan to move to one you should not pay much attention to the hype around streams today.

They are useful for applications that process numerical data (mostly IoT), and most demos presented using streams revolve around this scenario, where you have a numerical stream you want to process. But that is far from the CRUD application most businesses are using Spring for today, and using the stream API to feed a CRUD application with data usually leads to these kinds of issues:
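A sketch of the pattern in question might look like the following (hypothetical names; Spring Data JPA and Spring MVC assumed). Each layer leaks streaming concerns instead of returning plain entities:

```java
// Repository: relies on driver-specific query hints to stream every row
interface CustomerRepository extends JpaRepository<Customer, Long> {

    @QueryHints(@QueryHint(name = "org.hibernate.fetchSize", value = "50"))
    @Query("select c from Customer c")
    Stream<Customer> streamAll();
}

// Service: writes entities straight into an OutputStream; the signature
// no longer tells you what type of entities are handled
@Service
class CustomerService {

    private final CustomerRepository repository;
    private final ObjectMapper mapper = new ObjectMapper();

    CustomerService(CustomerRepository repository) {
        this.repository = repository;
    }

    void writeAll(OutputStream out) throws IOException {
        try (Stream<Customer> customers = repository.streamAll()) {
            for (Customer customer : (Iterable<Customer>) customers::iterator) {
                mapper.writeValue(out, customer);
            }
        }
    }
}

// Controller: returns an opaque StreamingResponseBody instead of a list of DTOs
@RestController
class CustomerController {

    private final CustomerService service;

    CustomerController(CustomerService service) {
        this.service = service;
    }

    @GetMapping("/customers")
    StreamingResponseBody all() {
        return out -> service.writeAll(out);
    }
}
```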

On the surface this might seem neat. But lets dig into that a bit.

In the repository you now need to rely on query hints that a database might or might not support. In this case the developer is telling the database driver to return everything. Depending on how good the database driver is and how modern the database is (which might be pretty old in the enterprise), that might cause performance issues.

The service method no longer reveals what type of entities it handles. While with non-streaming operations the service would return a list or Stream of entities, here we are only given an output stream, with no notion of what will happen to it. This would be a nightmare to test.

One key element of data persistence is transactions. Traditionally streaming and transactions have not worked together, and only recently has Spring gained some support for them for MongoDB and R2DBC, neither of which is widely used for enterprise data. You can read more about the support in the Pivotal blog. To summarize: you lose transactions if you stream.

Finally, let's look at the controller. We are returning a StreamingResponseBody instead of a clear list or stream of entities, again making testing harder and more opaque.

Remember KISS. If your application does not rely on data streams you don't need the Spring streaming API. But by all means do not confuse this with Java Streams, which can be useful when doing data transformations.
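For example, a plain Java Streams transformation like this has nothing to do with Spring's streaming machinery (the order data is made up for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class Order {
    final String customer;
    final int amount;

    Order(String customer, int amount) {
        this.customer = customer;
        this.amount = amount;
    }
}

class OrderReport {
    // In-memory data transformation: total order amount per customer
    static Map<String, Integer> totalPerCustomer(List<Order> orders) {
        return orders.stream()
                .collect(Collectors.groupingBy(order -> order.customer,
                        Collectors.summingInt(order -> order.amount)));
    }
}
```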

I hope these three simple observations will help you build applications that not only work today, but will allow your application to grow and evolve without major refactoring and security issues. To achieve this, the best advice I can give you is to never adopt the latest sexy solution provided by the vendor unless it gives significant improvements and remains easy to modify in the future.

Product documentation now available!

Product documentation is now available!

Using a library, extension or plugin can be hard without proper documentation. Especially if you are just jumping into a new technology or language.

To better help developers get started with the DS products, I am now happy to open up the Documentation site!

The documentation site offers full technical documentation into the details of how to use the products and in some cases how to develop them further.

The documentation site also offers the possibility to comment on specific pages to help others or to suggest or clarify unclear topics.

I hope this will help everyone in getting to know the DS products and use them in your projects. If you have improvement ideas on how to make the documentation better feel free to comment below.

Happy reading!

TodoMVC: Fullstack serverless applications (Part 2: The REST API)

Learn how to write REST API's with serverless FN Project functions as well as connecting them to AWS DynamoDB.

In this article let's explore how we can build a REST API using FN functions. This is the second part of the TodoMVC app we started building in the previous post. A demo running this project can be found here.

To build our back-end we are going to do two things: set up the REST API end-points and connect them to DynamoDB, where we are going to store our todo items.


The API we are going to create is a simple CRUD (Create-Read-Update-Delete) API for the Todo items.

There are multiple ways we could split this functionality into FN functions.

  1. Handle all REST methods in one function
  2. Create one function for each REST method
  3. Create one function for read operations (GET) and one for write operations (PUT,POST,PATCH,DELETE)

Which approach you select depends on your use-case.

If we had a lot of business logic then option 2 might have been better, as we could have split our code based on operation. However, if we had done that we wouldn't have been able to write a pure REST API, since every function needs a unique path, and with REST, for example, GET and POST might share the same path.

The third option might be interesting if we anticipated that our application would have a lot of read requests but not that many write requests. By splitting it this way we could load balance the read operations differently from the write operations, and perhaps add more FN servers for reads to provide better throughput. With this approach you have the same downside as with option 2, i.e. you will not be able to write a pure REST API.

We are going to select option 1, as our business logic is really small and it allows us to use a single URL path for all the operations we need for our TodoMVC app. We also don't anticipate a lot of requests, so we don't have to care about load balancing.

Before we continue, let's recap how our project structure currently looks after we added the UI logic in the previous post.


So to add the API we start by creating a new submodule in the existing project for our back-end functionality.


Next we will need to turn the module into a FN function to serve our REST API.

We start by removing any auto-generated src folder Intellij might have created for us. Then, open up the api/build.gradle file and add the following content:

/**
 * We use ReplaceTokens to replace property file placeholders
 */
import org.apache.tools.ant.filters.ReplaceTokens

/**
 * Main FN function configuration
 */
fn {
    functionClass = 'TodoAPI'
    functionMethod = 'handleRequest'
    functionPaths = ['/items']
}

/**
 * Configure FN Function timeouts
 */
fnDocker {
    idleTimeout = 30
    functionTimeout = 60
}

dependencies {
    compile 'com.amazonaws:aws-java-sdk-dynamodb:1.11.490'
    compile 'org.slf4j:slf4j-simple:1.7.25'
}

/**
 * Replaces the AWS credential placeholders with real credentials
 */
processResources {
    filter(ReplaceTokens, tokens: [
            'aws.accessKeyId' : System.getenv('AWS_ACCESS_KEY_ID') ?: project.findProperty('aws.accessKeyId') ?: '',
            'aws.secretKey' : System.getenv('AWS_SECRET_ACCESS_KEY') ?: project.findProperty('aws.secretKey') ?: '',
            'aws.region' : System.getenv('AWS_REGION') ?: project.findProperty('aws.region') ?: ''
    ])
}
Finally, we just invoke the :api:fnCreateProject task to create the function source stubs based on the previously created build configuration.


Now our project structure looks like this
Final project structure

We are now ready to implement the TodoAPI.

Persistence with AWS DynamoDB

Now that we have our function ready, lets implement the persistence layer.

The first thing we need is to model how the Todo items should look in AWS DynamoDB. We can do that by creating a model class that specifies how a single item is modeled:

@DynamoDBTable(tableName = "todomvc")
public class TodoItem implements Serializable {

    private String id = UUID.randomUUID().toString();
    private boolean active = true;
    private String description;

    @DynamoDBHashKey // marks the id as the table key
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public boolean isActive() { return active; }
    public void setActive(boolean active) { this.active = active; }

    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }

    /**
     * Helper method to create a TodoItem from an InputStream
     */
    public static Optional<TodoItem> fromStream(InputStream stream) {
        try {
            return Optional.of(new ObjectMapper().readValue(stream, TodoItem.class));
        } catch (IOException e) {
            return Optional.empty();
        }
    }

    /**
     * Helper method to convert the item into a byte array
     */
    public Optional<byte[]> toBytes() {
        try {
            return Optional.of(new ObjectMapper().writeValueAsBytes(this));
        } catch (JsonProcessingException e) {
            return Optional.empty();
        }
    }
}
This is pretty much a standard POJO with some DynamoDB specific annotations to help serialize the object. Our model is pretty simple; every item only needs two fields to keep track of: description and active.

The id field is only there to help us uniquely identify an item so we can modify or remove it. We could just as well have used the description field as our DynamoDB key, but that would have implied that we wouldn't be able to store duplicate items in our todo list.

Now that we have our item model, let's get back to the API implementation.

For our todomvc application we will need to support the following actions:

  1. List all todo items (GET)
  2. Add a new item (POST)
  3. Update an existing item (PUT)
  4. Delete an item (DELETE)

To do that we are going to modify our function to handle all those cases with a switch-statement:

public OutputEvent handleRequest(HTTPGatewayContext context, InputEvent input) throws JsonProcessingException {
    switch (context.getMethod()) {
        case "GET": {
            return fromBytes(new ObjectMapper().writeValueAsBytes(getItems()), Success, JSON_CONTENT_TYPE);
        }
        case "POST": {
            return input.consumeBody(TodoItem::fromStream)
                    .map(this::addItem)
                    .flatMap(TodoItem::toBytes)
                    .map(bytes -> fromBytes(bytes, Success, JSON_CONTENT_TYPE))
                    .orElseGet(() -> emptyResult(FunctionError));
        }
        case "PUT": {
            return input.consumeBody(TodoItem::fromStream)
                    .map(this::updateItem)
                    .flatMap(TodoItem::toBytes)
                    .map(bytes -> fromBytes(bytes, Success, JSON_CONTENT_TYPE))
                    .orElseGet(() -> emptyResult(FunctionError));
        }
        case "DELETE": {
            return input.consumeBody(TodoItem::fromStream)
                    .map(this::deleteItem)
                    .flatMap(TodoItem::toBytes)
                    .map(bytes -> fromBytes(bytes, Success, JSON_CONTENT_TYPE))
                    .orElseGet(() -> emptyResult(FunctionError));
        }
        default:
            return emptyResult(FunctionError);
    }
}
As you can see we start by modifying our function to inject the HTTPGatewayContext as well as the InputEvent so we can process the request. From the context we get the HTTP method used to call the function and from the input event we get the HTTP request body.

Next, depending on which HTTP method was used, we convert the HTTP body into our TodoItem model and save it to the database.

To help us understand how this gets saved to the database, let's look at the rest of the class:

public class TodoAPI {

    private static final String JSON_CONTENT_TYPE = "application/json";

    private final DynamoDBMapper dbMapper;

    public TodoAPI() {
        var awsProperties = getAWSProperties();
        var awsCredentials = new BasicAWSCredentials(
                awsProperties.getProperty("aws.accessKeyId"),
                awsProperties.getProperty("aws.secretKey"));
        var awsClient = AmazonDynamoDBClient.builder()
                .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
                .withRegion(awsProperties.getProperty("aws.region"))
                .build();

        dbMapper = new DynamoDBMapper(awsClient);
    }

    public OutputEvent handleRequest(HTTPGatewayContext context, InputEvent input) throws JsonProcessingException {
        // Implementation omitted
    }

    private List<TodoItem> getItems() {
        return new ArrayList<>(dbMapper.scan(TodoItem.class, new DynamoDBScanExpression()));
    }

    private TodoItem updateItem(TodoItem item) {
        dbMapper.save(item);
        return item;
    }

    private TodoItem addItem(TodoItem item) {
        dbMapper.save(item);
        return item;
    }

    private TodoItem deleteItem(TodoItem item) {
        dbMapper.delete(item);
        return item;
    }

    private static Properties getAWSProperties() {
        var awsProperties = new Properties();
        try {
            // resource name assumed; the placeholders in it are filled by build.gradle
            awsProperties.load(TodoAPI.class.getResourceAsStream("/aws.properties"));
        } catch (IOException e) {
            throw new RuntimeException("Failed to load AWS credentials!", e);
        }
        return awsProperties;
    }
}
As you probably noticed, we set up a DynamoDBMapper using the credentials we have stored in a properties file under our project resources.

If you check out the api/build.gradle file you will notice that we are populating the real credentials into the file at build time.

Once we have the DynamoDBMapper it is a trivial task to query DynamoDB for items as well as add, update and remove items. The mapper will handle all communication for us.

Wrapping up

This is pretty much all there is to create a REST API using a FN Function.

We can now run the project as we did in the first part.

The difference will now be that both the UI and the API functions will be deployed to the FN server. If you want to try out the REST API it will be available under http://localhost:8080/t/todomvc/items .
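For example, the items end-point could be exercised with curl like this (the JSON fields follow the TodoItem model above; the FN server must of course be running locally):

```shell
# List all todo items
curl http://localhost:8080/t/todomvc/items

# Add a new todo item
curl -X POST http://localhost:8080/t/todomvc/items \
     -H "Content-Type: application/json" \
     -d '{"description": "Write blog post", "active": true}'
```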

The sources for the full example, which you can check out and directly run, are available here. You will need valid AWS credentials to try out the example, as well as a new DynamoDB instance to host your data.

Gradle Vaadin Flow 1.0 released!

Gradle Vaadin Flow 1.0 provides you with the most performant and easy to use build integration for Vaadin Flow applications today.

I'm happy to announce that the plugin has now reached production ready status!

After 17 pre-releases and countless rounds of testing and bugfixing, it is about time the plugin got a stable release. I know some of you have been eagerly waiting for this :)

It has been a joy working on the plugin and a big thank you goes out to those who have tested it and given excellent feedback at such an early stage of the project. I don't think it would have been possible to iron out most of the rough edges without your help.

A big thank you also goes out to the project sponsors who have made this project possible. By providing Open-Source sponsoring for the project they have made it possible to work on this project and provide you with a Gradle integration for your Vaadin projects. If you want to join them be sure to check out the Sponsorship page to find out how you also could help out with the project funding.

Here is a short list of features it provides:

For more information check out the product page.

But of course we are not done yet, we are only getting started!

Now it is your turn to take the project into use and give feedback of what is still missing or what does not work. If there is a feature or tweak you would like or you spot a bug that is preventing you from using the plugin be sure to submit an issue into the issue tracker over at Github.

To read more about the different releases and what they contained, be sure to check out the blog articles, example projects, or the project wiki.

Happy building!

TodoMVC: Fullstack serverless applications (Part 1: The UI)

Learn how to write fullstack serverless Java web applications with Fn Project and ReactJS.

In this two part blog series we are going to look at how we can serve a full web application by only using FN Project Java serverless functions. We are going to do this by writing a classic TodoMvc application all the way from the UI with React to the persistence leveraging Amazon DynamoDB. In this first part we are going to focus on building the front-end while in the second part we finish the application by creating an API for the UI.

Why serverless?

When thinking of "serverless" or FAAS (Function-As-A-Service) you might think that the primary benefit is its simplicity: you don't have to care about running an application server and can focus on writing application code. While that is partly true, I think there are even more substantial benefits to consider.


All serverless functions are stateless by design. Trying to save state in a function simply will not work, since after the function is executed the application is terminated, and along with it all the memory it consumed. This means a lot less worry about memory leaks or data leaks, and allows even junior developers to write safe applications.


Serverless as a paradigm is similar to what micro-services provide: a way of cleanly separating functionality into smaller units, or Bounded Contexts as Martin Fowler so famously put it. Serverless functions allow you to do the same as micro-services, grouping functions into serverless applications (like the one I will be showing), with the benefit of writing less boiler-plate code than traditional micro-service frameworks.

Cost effective

A common way to host your applications is to purchase a VPS from a vendor like Digital Ocean, or to set up an Amazon EC2 instance, and what you pay for is ultimately how much memory and CPU you use. A common micro-service approach is then to deploy the application on an embedded application server like Jetty or Tomcat and further wrap that inside a Docker container. The downside is that once deployed it will actively consume resources even while nobody is using your application, and every micro-service will contain a fully fledged application server. In contrast, serverless functions only consume resources while they are active, which means you only pay for what you need. If you've split your application wisely into functions you can optimize even further on a per-function basis, so that the most used functionality of your application gets higher priority (and more resources) while the less used gets less.

Of course, using serverless functions is not a silver bullet and comes with some considerations.

If you have a high-volume application it might be wise to split it into a few micro-services that take the most load, as they are always active, and then implement serverless functions around those services for the less used functionality. It is also worth noting that serverless functions come with a ramp-up time: if the function is not hot (it hasn't been invoked in a while), it will take a few more milliseconds to start as the Docker container wakes up from hibernation, causing a slight delay. You can affect this by tweaking the function, but more about that later.

Creating our TodoMVC project

For those impatient ones who just want to browse the code, the full source code for this example can be found here

And here is the application live:

You can open the application full screen in a new tab by clicking here.

Getting started

To create a new serverless app create a new Gradle project in Intellij IDEA and select Java. Like so:

Next we will need to configure our Gradle build to create Serverless applications.

In the newly created project, open up the build.gradle file and replace its contents with the following:

plugins {
    // For support for Serverless FN applications
    id 'com.devsoap.fn' version '0.1.7' apply false
    // For support for fetching javascript dependencies
    id "com.moowork.node" version "1.2.0" apply false
}

group 'com.example'

subprojects {

    // Apply the plugin to all sub-projects
    apply plugin: 'com.devsoap.fn'

    // We want to develop with Java 11
    sourceCompatibility = 11
    targetCompatibility = 11

    // Add Maven Central and the FN Project repositories
    repositories {
        mavenCentral()
    }

    // Add the FN function API dependency
    dependencies {
        compile fn.api()
    }
}
As you probably already figured out, we are going to build a multi-module Gradle project where our sub-modules will be FN functions. To do that we leverage the Devsoap FN Gradle plugin as well as the Moowork Node plugin.

Also, you might want to remove any src folder that was generated for the parent project; our sources will live in the submodules.

Here is how it will look:

Next, let's create our first function!

Right click on the project, and create a new UI module:

As we did before, remove any src folder which is automatically created.

Open up the ui/build.gradle file if it is not open yet, and replace the contents with the following:

apply plugin: 'com.moowork.node'

/**
 * Configure FN Function
 */
fn {
    // The name of the entrypoint class
    functionClass = 'TodoAppFunction'

    // The name of the entrypoint method
    functionMethod = 'handleRequest'

    // The available URL sub-paths
    functionPaths = [
        '/',
        '/favicon.ico',
        '/bundle.js',
        '/styles.css'
    ]
}
Let's take a look at what this means.

On the first line we are applying the Node Gradle plugin. We are later going to use it to compile our front-end React application.

Then we configure the Fn function.

functionClass will be the main class of our UI; this is the class that is called when somebody accesses our application.

functionMethod is the actual method that will be called. This will host our function logic.

functionPaths are all the sub-paths our function will listen to. We will have to implement some logic to handle all of these paths.

Right, now we have our function definition, but we don't yet have our function sources. Let's create them.

From the right-hand side Gradle navigation menu, open the UI module's Fn task group and double-click on fnCreateFunction.

Let's have a look at the created function:

import static java.util.Optional.ofNullable;

public class TodoAppFunction {

    public String handleRequest(String input) {
        String name = ofNullable(input).filter(s -> !s.isEmpty()).orElse("world");
        return "Hello, " + name + "!";
    }
}

By default it generates a basic Hello World type of function, which is not very exciting. Let's now add our function logic so it looks like this:

/**
 * Serves our react UI via a function call
 */
public class TodoAppFunction {

    private static final String APP_NAME = "todomvc";

    /**
     * Handles the incoming function request
     * @param context
     *      the request context
     * @return
     *      the output event with the function output
     */
    public OutputEvent handleRequest(HTTPGatewayContext context) throws IOException {
        var url = context.getRequestURL();
        var filename = url.substring(url.lastIndexOf(APP_NAME) + APP_NAME.length());
        if("".equals(filename) || "/".equals(filename)) {
            filename = "/index.html";
        }

        var body = loadFileFromClasspath(filename);

        var contentType = Files.probeContentType(Paths.get(filename));
        if(filename.endsWith(".js")) {
            contentType = "application/javascript";
        } else if(filename.endsWith(".css")) {
            contentType = "text/css";
        }

        return OutputEvent.fromBytes(body, OutputEvent.Status.Success, contentType);
    }

    /**
     * Loads a file from inside the function jar archive
     * @param filename
     *      the filename to load, must start with a /
     * @return
     *      the loaded file content
     */
    private static byte[] loadFileFromClasspath(String filename) throws IOException {
        var out = new ByteArrayOutputStream();
        try(var fileStream = TodoAppFunction.class.getResourceAsStream(filename)) {
            fileStream.transferTo(out);
        }
        return out.toByteArray();
    }
}

Let's look at the function implementation a bit:

We create a helper method loadFileFromClasspath that will load any file from the current function classpath. By using the helper method we will be able to serve any static resources via our function.
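
This classpath-loading pattern is easy to try out in isolation. Below is a minimal, self-contained sketch of the same idea; ClasspathDemo and its load helper are illustrative names only, and a class file from the JDK stands in for our static resources, since those only exist inside the function jar:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ClasspathDemo {

    // Same pattern as the function's helper: anything placed under
    // src/main/resources lands on the classpath and can be streamed out as bytes.
    static byte[] load(Class<?> anchor, String name) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (InputStream in = anchor.getResourceAsStream(name)) {
            in.transferTo(out); // requires Java 9+, fine for our Java 11 target
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // A JDK class file stands in for /index.html here
        byte[] bytes = load(Object.class, "/java/lang/Object.class");
        System.out.println(bytes.length > 0); // prints "true"
    }
}
```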

Next, the meat of it: the handleRequest method. This is the entry point where all requests made to the function arrive.

If you remember from the function definition we wrote previously, we assigned four sub-paths to the URL: '/', '/favicon.ico', '/bundle.js', and '/styles.css'. What we do in handleRequest is simply examine the incoming URL, extract the filename from it, and load that file from our classpath. In essence, the function we have created is a static file loader!
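
To make that concrete, the URL-to-filename logic can be tried out on its own. Here is a small stand-alone sketch of the same extraction (the names FilenameExtraction and toFilename are hypothetical, used only for this illustration):

```java
public class FilenameExtraction {

    private static final String APP_NAME = "todomvc";

    // Mirrors the handling in handleRequest: everything after the app name
    // becomes the filename, with the application root mapping to /index.html.
    static String toFilename(String url) {
        String filename = url.substring(url.lastIndexOf(APP_NAME) + APP_NAME.length());
        if ("".equals(filename) || "/".equals(filename)) {
            filename = "/index.html";
        }
        return filename;
    }

    public static void main(String[] args) {
        System.out.println(toFilename("http://localhost:8080/t/todomvc"));           // /index.html
        System.out.println(toFilename("http://localhost:8080/t/todomvc/bundle.js")); // /bundle.js
    }
}
```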

What about security: does this mean you can now load any file via this function? The answer is of course no; the function can only be invoked on the sub-paths given in the function definition. Requests to any other paths will simply never reach it.

Including the static files

We now have our function, but it will not return anything yet, as the static files we referenced in the function definition don't exist yet.

Let's start with the bootstrap HTML file we want to serve.

We create a file named index.html and place it under src/main/resources. By placing the file there it will be included in our function resources and can be found from the classpath by using our function we defined above.

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>TodoMVC - A fully serverless todo app!</title>
        <link rel="shortcut icon" href="todomvc/favicon.ico" />
        <link rel="stylesheet" type="text/css" href="todomvc/styles.css">
    </head>
    <body>
        <div id="todoapp"></div>
        <script src="todomvc/bundle.js"></script>
    </body>
</html>

Pretty basic stuff: we define a favicon and a CSS stylesheet in the head section, and in the body we define the root div element and the bundle.js script that bootstraps our React app.

Next we create a CSS file under src/main/resources and call it styles.css. In it we define some styles for the application:

body {
    background: #f5f5f5;
    font-weight: 100;
}
.container {
    background: #fff;
    margin-left: auto;
    margin-right: auto;
}
h3 {
    color: rgba(175, 47, 47, 0.15);
    font-size: 100px;
    background: #f5f5f5;
    text-align: center;
    margin: 0;
}
.inner-container {
    border: 1px solid #eee;
    box-shadow: 0 0 2px 2px #eee;
}
#new-todo {
    background: none;
    font-size: 24px;
    height: 2em;
    border: 0;
}
.items {
    list-style: none;
    font-size: 24px;
}
.itemActive {
    width: 2em;
    height: 2em;
    background-color: white;
    border-radius: 50%;
    vertical-align: middle;
    border: 1px solid #ddd;
    -webkit-appearance: none;
    outline: none;
    cursor: pointer;
}
.itemActive:checked {
    background-color: lightgreen;
}
.itemRemove {
    margin-right: 20px;
    color: lightcoral;
    text-decoration: none;
}
footer {
    line-height: 50px;
    color: #777;
}
.itemsCompleted {
    padding-left: 20px;
}
.activeStateFilter {
    float: right;
}
.stateFilter {
    cursor: pointer;
    padding: 2px;
}
.stateFilter.active {
    border: 1px solid silver;
    border-radius: 4px;
}

If you've built any web apps before, this shouldn't be anything new.

Finally, we download a nice favicon.ico file for our application and also place it under src/main/resources. You can find free ones online, or design a new one yourself if you are the creative type. For our demo I chose this one.

Building the UI with React and Gradle

Now that we have our static files in place we still need to build our front-end React application.

We start by defining our front-end dependencies in a file called package.json in the root folder of the UI project. It will look like this:

    "name": "ui",
    "version": "1.0.0",
    "main": "index.js",
    "license": "MIT",
    "babel": {
        "presets": [
    "scripts": {
        "bundle": "webpack-cli --config ./webpack.config.js --mode=production"
    "devDependencies": {
        "@babel/core": "^7.2.2",
        "@babel/preset-env": "^7.3.1",
        "@babel/preset-react": "^7.0.0",
        "babel-loader": "^8.0.5",
        "css-loader": "^2.1.0",
        "html-webpack-inline-source-plugin": "^0.0.10",
        "html-webpack-plugin": "^3.2.0",
        "style-loader": "^0.23.1",
        "webpack": "^4.29.0",
        "webpack-cli": "^3.2.1"
    "dependencies": {
        "babel": "^6.23.0",
        "babel-core": "^6.26.3",
        "react": "^16.7.0",
        "react-dom": "^16.7.0",
        "whatwg-fetch": "^3.0.0"

This should be a very standard set of dependencies when building React apps.

Next we are going to use Webpack and Babel to bundle all our Javascript source files into a single bundle.js, which will also be included in our static resources.

To do that we need to create another file, webpack.config.js in our UI root folder to tell the compiler how to locate and bundle our javascript files. In our case it will look like this:

var path = require('path');

module.exports = {
    entry: [
        // our main application entry point file under src/main/jsx
    ],
    output: {
        path: path.resolve(__dirname, './build/resources/main'),
        filename: 'bundle.js'
    },
    module: {
       rules: [
           {
               test: /\.(js|jsx)$/,
               exclude: /node_modules/,
               use: ['babel-loader']
           }
       ]
    },
    resolve: {
       extensions: ['*', '.js']
    }
};

There are two noteworthy things I should mention about this.

In the entry section we are pointing to a javascript source file that will act as our main application entry point. In a moment we are going to create that file.

In output we are setting the path where we want to output the ready bundle.js file. In our case we want to output to build/resources/main as that is what Gradle will use when packaging our function.

Note: We could also have set the path to src/main/resources and it would have worked. But it is a good idea to separate generated files we don't commit to version control from static files we want to commit to version control.

Now that we have our configurations in place, we still need to instruct our Gradle build to build the front-end. We do so by adding the following task to our build.gradle file:

/**
 * Configure Node/NPM/Yarn
 */
node {
    download = true
    version = '11.8.0'
}

/**
 * Bundles Javascript sources into a single JS bundle to be served by the function
 */
task bundleFrontend(type: YarnTask) {
    inputs.file project.file('package.json')
    inputs.file project.file('yarn.lock')
    inputs.files project.fileTree('src/main/html')
    inputs.files project.fileTree('src/main/jsx')
    outputs.file project.file('build/resources/main/bundle.js')
    yarnCommand = ['run', 'bundle']
}

// Ensure the bundle is rebuilt whenever the function resources are assembled
processResources.dependsOn bundleFrontend

This task will download all necessary client dependencies using Yarn (a package manager) and then compile our sources into the bundle.js file.

The last line ensures that whenever we build the function, the latest bundle is built and included in the function distribution.

Now the only things we are missing are the actual Javascript source files. So we create a new directory src/main/jsx and place two source files in it:


import React from 'react';
import ReactDOM from 'react-dom';
import TodoList from './todo-list.js'

/**
 * Todo application main application view
 */
class TodoApp extends React.Component {

  constructor(props) {
    super(props);
    this.state = { items: [], filteredItems: [], text: '', filter: 'all' };
    this.handleChange = this.handleChange.bind(this);
    this.handleSubmit = this.handleSubmit.bind(this);
    this.handleActiveChange = this.handleActiveChange.bind(this);
    this.handleRemove = this.handleRemove.bind(this);
    this.handleFilterChange = this.handleFilterChange.bind(this);
  }

  componentDidMount() {
    fetch('todomvc/items')
        .then(result => { return result.json() })
        .then(json => { this.setState({items: json}) })
        .catch(ex => { console.log('parsing failed', ex) });
  }

  componentWillUpdate(nextProps, nextState) {
        if(nextState.filter === 'all') {
            nextState.filteredItems = nextState.items;
        } else if(nextState.filter === 'active') {
           nextState.filteredItems = nextState.items.filter(item => item.active);
        } else if(nextState.filter === 'completed') {
           nextState.filteredItems = nextState.items.filter(item => !item.active);
        }
  }

  render() {
    return (
      <div class="container">
        <div class="inner-container">
            <header class="itemInput">
                <form onSubmit={this.handleSubmit}>
                    <input id="new-todo" value={this.state.text} onChange={this.handleChange}
                           placeholder="What needs to be done?" />
                </form>
            </header>
            <section class="itemList">
                <TodoList items={this.state.filteredItems} onActiveChange={this.handleActiveChange} onRemove={this.handleRemove} />
            </section>
            <footer class="itemControls">
                <span class="itemsCompleted">{this.state.items.filter(item => item.active).length} items left</span>
                <span class="activeStateFilter">
                    <span filter="all" class={this.state.filter === 'all' ? "stateFilter active" : "stateFilter"} onClick={this.handleFilterChange}>All</span>
                    <span filter="active" class={this.state.filter === 'active' ? "stateFilter active" : "stateFilter"} onClick={this.handleFilterChange}>Active</span>
                    <span filter="completed" class={this.state.filter === 'completed' ? "stateFilter active" : "stateFilter"} onClick={this.handleFilterChange}>Completed</span>
                </span>
            </footer>
        </div>
      </div>
    );
  }

  handleChange(e) {
    this.setState({ text: e.target.value });
  }

  handleSubmit(e) {
    e.preventDefault();
    if (!this.state.text.length) {
      return;
    }

    const newItem = {
      description: this.state.text,
      active: true
    };

    fetch('todomvc/items', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(newItem)
    }).then(result => {
       return result.json();
    }).then(json => {
       this.setState( state => ({ items: state.items.concat(json), text: ''}) );
    }).catch(ex => {
      console.log('parsing failed', ex);
    });
  }

  handleActiveChange(newItem) {
    this.setState( state => ({
        text: '',
        items: state.items.map(oldItem => {
            if(oldItem.id === newItem.id) {
                fetch('todomvc/items', {
                      method: 'PUT',
                      headers: { 'Content-Type': 'application/json' },
                      body: JSON.stringify(newItem)
                }).then(result => {
                  return result.json();
                }).then(json => {
                  return json;
                });
                return newItem;
            }
            return oldItem;
        })
    }));
  }

  handleRemove(item) {
    fetch('todomvc/items', {
          method: 'DELETE',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(item)
    }).then( result => {
        this.setState(state => ({
            items: state.items.filter(oldItem => oldItem.id != item.id)
        }));
    });
  }

  handleFilterChange(e) {
    var filter = e.target.getAttribute("filter");
    this.setState( state => ({
        filter: filter
    }));
  }
}

ReactDOM.render(<TodoApp />, document.getElementById("todoapp"));

And todo-list.js:

import React from 'react';

/**
 * Todo list for managing todo list items
 */
export default class TodoList extends React.Component {

  render() {
    return (
      <ul>
        {this.props.items.map(item => (
          <li class="items" key={item.id}>
             <input class="itemActive" itemId={item.id} name="isDone" type="checkbox" checked={!item.active}
                     onChange={this.handleActiveChange.bind(this)} />
             <span class="itemDescription">{item.description}</span>
             <a class="itemRemove" itemId={item.id} href="#" onClick={this.handleRemove.bind(this)} >&#x2613;</a>
          </li>
        ))}
      </ul>
    );
  }

  handleActiveChange(e) {
    var itemId = e.target.getAttribute('itemId');
    var item = this.props.items.find( item => { return item.id === itemId });
    item.active = !item.active;
    console.log("Changed active state of " + item.description + " to " + item.active);
    this.props.onActiveChange(item);
  }

  handleRemove(e) {
      var itemId = e.target.getAttribute('itemId');
      var item = this.props.items.find( item => { return item.id === itemId });
      console.log("Removing item " + item.description);
      this.props.onRemove(item);
  }
}

Both of these javascript source files are pretty basic if you have done React before. If you haven't, any React primer will cover the concepts used here.

Now we have everything we need for our app. Let's have a final look at how our project structure looks:
Final Project Structure

Running the project

Of course, while developing the project we also want to try it out on our local machine. Before you continue you will need Docker, so install that first.

To get your development FN server running, run the fnStart task from the root project:

FN Server start

Once the server is running you can deploy the function by double-clicking on the fnDeploy task.

Once the function is deployed you should be able to access it on http://localhost:8080/t/todomvc.

To be Cont’d!

We are now finished with the front-end function. But if you run the application we just made, you will notice it is not yet working.

In the next part we will finish the application and hook our front-end function up to our back-end API and DynamoDB. Check it out!

2018 in Review

A summary of what has happened in 2018.
2018 in Review

The year 2018 has soon come to an end and I think this is a good time to look at what has been accomplished this year before we move on to the next one.

We started the year in February by examining how we can improve keeping all those Maven dependencies in check and up to date by creating dependency version reports in Dependency version reports with Maven. In the article we learned how to leverage Groovy to read Maven plugin text reports and convert them to color encoded HTML reports.

In March the first version of the Gradle Vaadin Flow Plugin was released to support building Vaadin 10 projects with Gradle. The launch was described in Building Vaadin Flow apps with Gradle where we examined the basics of how the plugin worked.

In April the Gradle Vaadin Flow Plugin was improved to work with Javascript Web Components as can be read from here Using Web Components in Vaadin with Gradle.

In May the Gradle Vaadin Flow Plugin got its first support for Vaadin production mode. To read more about production mode check out this article Production ready Gradle Vaadin Flow applications.

We also examined how we can build robust, functional micro-services with Groovy and Ratpack in Building RESTful Web Services with Groovy. As a side note, this has been the most read blog article of the whole year, so if you haven't read it yet, you have missed out!

In June the Gradle Vaadin Flow Plugin got support for Polymer custom styles as well as improvements to creating new Web Components in Vaadin 10+ projects. The release notes (Gradle Vaadin Flow plugin M3 released) from that time reveal more about that.

In July we took a look at Gradle plugin development and how we can more easily define optional inputs for Gradle tasks in Defining optional input files and directories for Gradle tasks.

A new version of the Gradle Vaadin Flow Plugin was also released with new support for the Gradle Cache, HTML/Javascript bundling, Web Templates and Maven BOM support. Wow, what a release that was! The new features were described in Gradle Vaadin Flow plugin M4 released.

In September we took a look at using alternate JVM languages (Groovy and Kotlin) to build our Vaadin 10 applications with in Groovy and Kotlin with Vaadin 10.

In October the Gradle Vaadin Flow Plugin got a new version again, this time with Spring Boot support and multi-module support.

The release also brought a controversial breaking change in requiring Gradle 5 due to the backward incompatible changes to the BOM support done in Gradle 5. However, it is starting to look like a good choice now that Gradle 5 is out and working for at least most of the community.

In late October or early November we also saw the second Devsoap product released: a new Gradle plugin, the Fn Project Gradle Plugin, for building serverless functions on top of Oracle's FN server.

The plugin lets developers leverage Gradle to both develop and deploy functions, using all common JVM languages (Java, Groovy and Kotlin), both locally and against remotely running FN servers. The plugin is still in heavy development but is already used in projects around the world. To read more about the plugin, check out the article Groovy functions with Oracle Fn Project.

In November the Gradle Vaadin Flow Plugin entered the Release Candidate stage, where the last bug fixes and improvements are being made to turn the plugin into a stable, production-ready release. This means it is very likely that in early 2019 we will see the first stable release of the plugin, so stay tuned ;)


Looking back, that is a whole lot of new releases and articles to fit into 2018. Beyond that, the year has seen many smaller releases and plenty of discussion on Github and elsewhere regarding the products. It has been good to see the communities we are involved in embrace these new ideas, and I'm certainly looking forward to what 2019 will bring.

Have a good new year, everyone, and see you in 2019!

Groovy functions with Oracle Fn Project

In this introduction I'll show you how you can easily start building your Fn Project functions with Groovy and Gradle.

The Fn Project is a new Open Source FAAS (Function-As-A-Service) framework by Oracle. In contrast to what Amazon or Google provide, this framework is fully open source and can be set up on your own hardware or with any VPS provider. In this short introduction I'll show you how you can easily start building your functions with Groovy and Gradle.

The FN Project framework consists of many parts that handle load balancing and scaling of the infrastructure, so it might seem daunting. But don't worry, you won't need any of that for this tutorial! We are only going to look at how to develop a function and deploy it to a single server; the operations part can come later. There are a few things you need to install first, though.

To be able to run the Fn Server you will need a Docker-enabled host. So if you don't have Docker installed yet, install it first.

You also will need to have Gradle installed.

You have Docker and Gradle installed now? Good!

Before we begin, the first question we have to ask ourselves is: what is a Fn function anyway?

A Fn function is in essence a small program that takes some inputs (the HTTP request) and from those inputs produces some output (the HTTP response). It does not really matter which programming language the function is written in, as long as the inputs and outputs are defined. In fact, the Oracle Fn Project is programming language agnostic and allows you to use any language you prefer.
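
Conceptually, then, a function is nothing more than a mapping from input to output; the HTTP plumbing is handled for you. Here is a rough stand-alone sketch of that idea (FnModel and greet are illustrative names, not part of the Fn FDK):

```java
import java.util.Optional;
import java.util.function.Function;

public class FnModel {

    // A Fn function, conceptually: some input in, some output out. The FDK and
    // the Docker wrapper map the HTTP request/response onto these values.
    static final Function<String, String> greet = input ->
        "Hello, " + Optional.ofNullable(input).filter(s -> !s.isEmpty()).orElse("world") + "!";

    public static void main(String[] args) {
        System.out.println(greet.apply(null));   // an empty request body -> Hello, world!
        System.out.println(greet.apply("John")); // a request body of "John" -> Hello, John!
    }
}
```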

But if our functions can be written in any language, how can it all be deployed on the same server?

This is where Docker comes in. Every function is wrapped in a Docker image provided by the Fn Project that routes the HTTP request information from the FN Server to the function running in the Docker container, and routes the response from your function back to the caller. This is what the Fn Project maintainers call cloud native. While all this routing might sound tricky (and most likely is, internally), for the function developer it is made fully transparent.

If you already took a look at the Fn Project documentation, you most likely noticed that they currently offer a number of officially supported languages for writing functions.

They also offer a CLI that simplifies creating functions in those languages.

However, while all of those are good languages and the CLI is fine to use, I wanted a more mature tool stack to work with, so I set out to write support for using Gradle to both build and deploy functions, and to allow developers to leverage Groovy for writing them.

Introducing a new Gradle plugin for building Groovy Fn functions

For a full introduction to the Gradle plugin see its Product Features page.

So lets have a look at how we can write our functions with Groovy and deploy it with Gradle!

Start by creating a new folder (in this example I used hello-fn as the folder name) somewhere on your system.

In that folder add an empty build.gradle file.

The new plugin is available in the Gradle Plugin Portal, so you can easily include it in your project by adding the following to your build.gradle:

plugins {
    id 'groovy'
    id 'com.devsoap.fn' version '0.1.7'
}

After you have applied the plugin the following tasks will be made available for your gradle project (you can check by running gradle tasks):

Fn tasks
fnCreateFunction - Creates a Fn Function project
fnDeploy - Deploys the function to the server
fnDocker - Generates the docker file
fnInstall - Installs the FN CLI
fnInvoke - Invokes the function on the server
fnStart - Starts the local FN Server
fnStop - Stops the local FN Server

Before we run any of the tasks, we still need to add a bit of configuration to our build.gradle so the Gradle plugin knows how to create a correct function.

fnDocker {
    functionClass = 'com.example.HelloWorld'
    functionMethod = 'sayHello'
}

We will also need some more dependencies so add those as well:

repositories {
    maven { url '' }
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.5.3'
    compile "com.fnproject.fn:api:1.0.74"
}

Right, now we are ready to run the fnCreateFunction task to create our function structure.

Once you have run the task it should have created the following folder structure:

├── build.gradle
└── src
    └── main
        └── groovy
            └── com
                └── example
                    └── HelloWorld.groovy

Not much boilerplate code there.

Let's have a look at the generated HelloWorld.groovy class:

package com.example

class HelloWorld {

    String sayHello(String input) {
        String name = input ?: 'world'
        "Hello, $name!"
    }
}

It couldn't be much simpler than this: a simple class with one method, sayHello, that takes a string input, assumes it is a name, and returns a greeting for that name.

Of course, in real-world situations you will most likely want to do a lot more, like reading request headers, setting response headers and generating different content-type payloads. This is all achievable using the Java FDK the FN Project provides.

Deploying the function locally

Now that we have our function, we most likely want to test it out locally before we put it in production.

To start the development server locally the Gradle plugin provides you with an easy task to use, fnStart. When you run that task it will download the CLI and start the FN Server on your local Docker instance.

$ docker ps
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS                              NAMES
30c96567802a        fnproject/fnserver   "./fnserver"        5 seconds ago       Up 4 seconds        2375/tcp,>8080/tcp   fnserver

Once it successfully has started it should be running on port 8080. If you want to stop the server you can run fnStop which will terminate the server.

Once the server is running we can deploy our function there. This can be achieved by running fnDeploy.

It will first build a docker image and then deploy the image to the FN Server.

$ gradle fnDeploy
> Task :fnDeploy

Building image hello-fn:latest 

6 actionable tasks: 6 executed

If everything went fine the function should now be deployed and ready to use.

Testing our function locally

The plugin comes with a built-in way of testing our running function.

By using the task fnInvoke we can issue a request to the function.

$ gradle fnInvoke

> Task :fnInvoke
Hello, world!

And if we post some input to the function:

$ gradle fnInvoke --method=POST --input=John

> Task :fnInvoke
Hello, John!

Of course, the fnInvoke task is limited in what it can do, and for more advanced use-cases we might want to use a separate app for testing queries (my favourite being Insomnia :) ).

To do that, point your REST client to http://localhost:8080/t/<app name>/<trigger name>. In our example the app name and trigger name are the same, so the URL would be http://localhost:8080/t/hello-fn/hello-fn. This is also what fnInvoke calls in the background.

Development tip: While developing the function locally you can make Gradle continuously monitor the source files and automatically re-deploy the function when something changes. You do that by running the fnDeploy task with the -t parameter, like so: gradle -t fnDeploy.

Taking the function to production

Once you have the function working locally you can deploy it to production quite easily by adding a few parameters to build.gradle.

fnDeploy {
  registry = '<docker registry url>'
  api = '<FN Server URL>'
}

Now if you run the fnDeploy task the function will be deployed to your remote Docker registry and FN Server.

And beyond...

This was just a short introduction to how you can work with Gradle and Groovy to build your functions. There are plenty of other fun things you can do with them; for example, if you want to see a slightly more advanced demo, have a look at the CORS proxy example here

For more information about how to use the plugin see

If you find any issues do not hesitate to create an issue at

Thanks for reading, I hope to get your feedback on this project!

Gradle Vaadin Flow plugin M6 released

The Gradle Vaadin Flow M6 release brings Spring Boot and multi-module support for the plugin.

The Gradle Vaadin Flow M6 release brings Spring Boot and multi-module support for the plugin.

This will be the last Milestone release for the plugin, after which development will target the first stable 1.0 release aimed for production use!

Spring Boot support

The plugin now includes full support for building Spring Boot projects. To enable the support you will need to include the official Spring Boot plugin in your build.

plugins {
    id "org.springframework.boot" version "2.0.5.RELEASE"
    id "com.devsoap.vaadin-flow" version "1.0.0.M6"
}

Once you have the plugin applied you can create a new Spring Boot project by just issuing the vaadinCreateProject command.

For a fully working example, have a look at the standard flow-spring-tutorial converted to use Gradle and the plugin instead of Maven.

Multi-module support

This release also included improvements to how multi-module projects are handled.

Upcoming breaking changes

To guarantee the longest possible forward compatibility, M6 is the last release to support Gradle 4.x. For the 1.0 stable release, the plugin will require Gradle 5. This means the 1.0 release will coincide as closely as possible with the release of Gradle 5.

To start preparing your project for Gradle 5 you can take Gradle 5 snapshots into use by using the Gradle wrapper with the latest snapshot versions of Gradle:

./gradlew wrapper --gradle-version=5.0-milestone-1

While M6 does not officially support Gradle 5 it should work for most use-cases already.

If you see a warning or something is broken, please report it to the issue tracker. This will allow us to build a stable first release on top of Gradle 5.


As usual the release is available to download from the Gradle Plugin Directory.

For more information about how to use the plugin see

If you find any issues do not hesitate to create an issue at

Gradle Vaadin Plugin to be included in Vaadin 11 Platform

Today it became official that the Gradle Vaadin Plugin made by Devsoap Inc. will become the foundation that the next major Vaadin Framework 11 Platform release will be built upon.

As some of you already suspected, today it became official that the Gradle Vaadin Plugin made by Devsoap Inc. will become the foundation that the next major Vaadin Framework 11 Platform release will be built upon.

This announcement was made by CEO Joonas Lehtinen from Vaadin late last night on Twitter:

Shortly thereafter a teaser video was released where, right at the beginning, you can already see the plugin in action:

These sure will be exciting times for the Vaadin community as it embraces the power of the Gradle eco-system, and it is a pleasure to see the hard work the Vaadin community has put in over the years pay off in this kind of big way!

For a full listing of what is planned for Vaadin 11 see

Exciting times ahead!


As usual the Gradle Vaadin Plugin is available to download from the Gradle Plugin Directory.

For more information about how to use the plugin see

If you find any issues do not hesitate to create an issue at

Groovy and Kotlin with Vaadin 10

Yet again a month has passed since the latest release of the Gradle Vaadin Flow plugin and now it is time for the fifth release of the plugin. This release focuses on exciting alternative JVM languages, Groovy and Kotlin!

Building Vaadin applications with Groovy

The plugin now provides templates in Groovy for all the creation tasks the plugin has. To enable the Groovy support you will need to include the official Groovy plugin in your build.

To do that add the following:

plugins {
    id 'groovy'
    id 'com.devsoap.vaadin-flow' version '1.0.0.M5'
}

That is it, nothing more needed.

Note: If you are not using vaadin.autoconfigure() you will need to add the Groovy dependency to your compile classpath. Like this:

dependencies {
    compile vaadin.groovy()
}

Once you have the Groovy plugin in the project, all the plugin's tasks will generate Groovy instead of Java. To try it out, create a new project with gradle vaadinCreateProject and you will see the following structure:

├── src
│   └── main
│       ├── groovy
│       │   └── com
│       │       └── example
│       │           └── vaadinflowplugintest
│       │               ├── VaadinFlowPluginTestServlet.groovy
│       │               ├── VaadinFlowPluginTestUI.groovy
│       │               └── VaadinFlowPluginTestView.groovy
│       └── webapp
│           └── frontend
│               └── styles
│                   └── vaadinflowplugintest-theme.css

All the other create* tasks will also now generate Groovy instead of Java, go ahead and try them out!

Building Web Templates with Groovy

One special feature the Groovy support brings is allowing you to build your Web Templates with Groovy.

If you run gradle vaadinCreateWebTemplate you will now see a new template file generated into src/main/webapp/frontend/templates, ending with .tpl instead of .html. If you look into it, it will look like this:

link (rel:'import', href: '../bower_components/polymer/polymer-element.html')

'dom-module' (id:'example-web-template') {
    template {
        div (id:'caption', 'on-click':'sayHello') {
            yield '[[caption]]'
        }
    }
    script('''
        class ExampleWebTemplate extends Polymer.Element {
            static get is() {
                return 'example-web-template'
            }
        }
        customElements.define(ExampleWebTemplate.is, ExampleWebTemplate);
    ''')
}

The TPL file format is a DSL provided by the Groovy Markup Template Engine to allow you to define HTML files programmatically with Groovy. At build time these templates will automatically be converted into plain HTML files.
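For reference, a template like the one above renders at build time into ordinary Polymer HTML along these lines (the exact formatting of the generated output is approximate):

```html
<link rel="import" href="../bower_components/polymer/polymer-element.html">
<dom-module id="example-web-template">
    <template>
        <div id="caption" on-click="sayHello">[[caption]]</div>
    </template>
    <script>
        class ExampleWebTemplate extends Polymer.Element {
            static get is() { return 'example-web-template' }
        }
        customElements.define(ExampleWebTemplate.is, ExampleWebTemplate);
    </script>
</dom-module>
```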

So why use them?

The DSL allows you to use any Groovy inside the templates, compose templates into each other, inject file system properties and do everything you normally can with Groovy. No more error-prone manual editing of HTML files is needed; the Groovy compiler will notify you of errors at compile time if your syntax is wrong, not at runtime in production.

The TPL support will for now only be available for Groovy projects, but in the future it might also be enabled for other project types.

Kotlin support

In addition to Groovy support, Kotlin support has also been added to the plugin.

Similarly to how the Groovy support works, you need to include the official Kotlin plugin to enable the Kotlin features.

plugins {
    id 'org.jetbrains.kotlin.jvm' version '1.2.61'
    id 'com.devsoap.vaadin-flow' version '1.0.0.M5'
}

Once you have that in your build.gradle you can create a Kotlin project with vaadinCreateProject, and you will get the same file structure as with the Groovy project.

The other tasks will also provide Kotlin templates.

If you are building your Vaadin applications with Kotlin I can also highly recommend looking into Vaadin On Kotlin, maintained by Martin Vysny.


As usual the release is available to download from the Gradle Plugin Directory.

For more information about how to use the plugin see

If you find any issues do not hesitate to create an issue at

Gradle Vaadin Flow plugin M4 released

This release focuses on improving production mode as well as client side dependency handling. It improves caching as well as adds support for Web templates.


Here is a more detailed overview of what improvements and features the new release contains:

Improved Yarn/Bower integration

The plugin client side handling has now been re-written to be fully built on top of Yarn instead of NPM.

This has allowed the plugin to take advantage of Yarn's caching and repository offline mirroring capabilities, providing more stable, reproducible and faster builds.

Gradle Cache support

The plugin now supports the Gradle cache. The cache is leveraged in production mode to store the component directories (bower_components and node_modules) as well as the transpiled results and other static resources. This should speed up builds considerably and allows pre-building production-mode applications.

To enable the Gradle cache add --build-cache when running Gradle from the command line, or put org.gradle.caching=true in your gradle.properties file.
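For example, to enable the cache permanently for a project:

```properties
# gradle.properties
org.gradle.caching=true
```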

Support for HTML/Javascript bundling

Previously the HTML and JavaScript were not bundled when generating the transpiled production mode, which made the application issue several HTTP requests for every component used. This in turn caused the application startup to be slow.

Now Polymer Bundler is used to compress all component JavaScript into one downloadable HTML file that can be cached by the browser.

Support for Web Templates

One new feature of Vaadin 10 is building components using pure HTML templates (Polymer Templates).

To create a Web Template the following task can be used:

gradle vaadinCreateWebTemplate --name MyWebTemplate

This will create two files: a Java class that contains the server-side logic of the component, and a MyWebTemplate.html in src/main/webapp/templates/ which contains the client-side logic (HTML+JS).

The templates work as an alternative way of creating simple Vaadin components. For more advanced components, separating the view from its representation is highly recommended.

For more information about the templates see

Improved Maven BOM support

The plugin's BOM support is now taking advantage of the latest Gradle BOM features. That means that if you want to use the Vaadin BOM you will now need to add a settings.gradle file to your project with the following contents:
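The original snippet is not preserved here; at the time (Gradle 4.6+), Gradle's improved POM/BOM support was behind a feature-preview flag, so the settings.gradle most likely contained something like:

```groovy
// settings.gradle — hypothetical reconstruction: enables Gradle's
// BOM support, which was a feature preview in Gradle 4.x
enableFeaturePreview('IMPROVED_POM_SUPPORT')
```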



As usual the release is available to download from the Gradle Plugin Directory.

For more information about how to use the plugin see

If you find any issues do not hesitate to create an issue at

Defining optional input files and directories for Gradle tasks

Currently Gradle does not support defining optional input files and directories out of the box, but with a few tricks it is easy to do.

To define an input file for a task you usually use either @InputFile or @InputDirectory, depending on whether you require a file or a directory as input. However, this only works when the file or directory actually exists on the file system; if it does not exist, Gradle will throw an exception.

So how do we go about defining optional input files?

It turns out there is no built-in solution for this, but with a little imagination we can accomplish it with the @Optional annotation and a Closure.

Here is an example:

@Optional
@InputDirectory
final Closure<File> inputDir = { new File('/path/to/dir').with { it.exists() ? it : null } }

So why does this work?

The @Optional annotation defines that the property is optional: it either has a value, or it can be null to indicate that the value is missing and the input should be ignored.

By using @Optional we prevent Gradle from complaining about a missing input value, but that alone does not solve the problem: we can still assign a File which does not exist, and Gradle will still complain about an @InputDirectory that does not exist.

To solve the second issue we use a closure in which we evaluate the File instance: it is returned if it exists, otherwise we return null to denote that it does not exist.

We use the .with{} method to evaluate the instance, and since we return null when the File does not exist, the @Optional contract is fulfilled and Gradle understands that this input should not be considered.

So why did we use a Closure instead of assigning the File directly to the field?

The reason is that had we assigned it directly, the existence of the file would have been checked at object construction time. With the Closure in place we defer the evaluation until the value is actually needed by Gradle. This gives us the possibility to create the directory in an earlier task, before Gradle evaluates the @InputDirectory value.
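Putting the pieces together, a complete task might look like this. The sketch below is illustrative: the task name, directory name and logging are made up for the example, and the closure is exposed through an annotated getter so Gradle resolves it lazily.

```groovy
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.InputDirectory
import org.gradle.api.tasks.Optional
import org.gradle.api.tasks.TaskAction

class ProcessExtrasTask extends DefaultTask {

    // Deferred lookup: the directory is only checked when Gradle queries the input
    final Closure<File> extrasDir = {
        new File(project.projectDir, 'extras').with { it.exists() ? it : null }
    }

    @Optional
    @InputDirectory
    File getExtrasDir() { extrasDir.call() }

    @TaskAction
    void process() {
        File dir = extrasDir.call()
        if (dir) {
            logger.lifecycle("Processing ${dir.listFiles()?.length ?: 0} files")
        } else {
            logger.lifecycle('No extras directory found, skipping')
        }
    }
}
```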