v0.7.4 - Running Infinitic on a managed Pulsar
The latest news about Infinitic, the easiest way to reliably orchestrate your distributed services at scale
Hi!
I’m pleased to announce that release 0.7.4 of Infinitic is now live! In a nutshell:
It's now easy to run Infinitic on various managed Pulsar cloud offerings
The Infinitic client can now send messages to Pulsar synchronously
An in-memory implementation is now available to ease development
Running Infinitic on a managed Pulsar
This release should bring joy to users of Pulsar in the cloud as it's now easy to use Infinitic with a managed Pulsar. To do so:
I have extended Infinitic's Pulsar configuration with an `authentication` section that lets you easily connect to a secured Pulsar cluster.
Workers now create missing tenants/namespaces/topics at startup. I had to do this because some service providers disable topic auto-creation. This brings two additional benefits:
You no longer need to set up a namespace before running any workflow. The worker checks for its existence at startup and creates it if needed. This eases deployment to production.
When you close a client, Infinitic can now more reliably delete the topic created for each client to receive synchronous responses. Previously, with topic auto-creation enabled, this topic was often recreated automatically when an engine tried to send a result to an already-disconnected client.
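As an illustration, connecting to a managed cluster with token authentication might look like the `infinitic.yml` sketch below. The key names, URLs, and token are placeholder assumptions on my part; check the documentation of Infinitic and of your Pulsar provider for the exact schema:

```yaml
pulsar:
  serviceUrl: pulsar+ssl://your-cluster.example.com:6651   # placeholder URL
  serviceHttpUrl: https://your-cluster.example.com:443     # placeholder URL
  tenant: infinitic
  namespace: dev
  authentication:                  # new in 0.7.4
    token: <your-auth-token>       # placeholder, provided by your Pulsar service
```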
I did my best to write versatile code that works in various environments with different security settings. It has been successfully tested on Clever Cloud, DataStax, and StreamNative.
Sending messages synchronously to Pulsar
By default, the Infinitic client dispatches a task or a workflow *asynchronously*, to avoid blocking the current thread while the message is sent to Pulsar:
Deferred<String> d = client.async(wf) { method(...) }
One drawback is that messages can be lost if the client closes before they are sent. And in some situations, you simply want to be sure that the client has actually delivered your message to Pulsar.
This release brings a new `join()` method that you can apply to a `Deferred` to wait until Pulsar has received the message:
Deferred<String> d = client.async(wf) { method(...) }.join()
You can also apply this method to the Infinitic client itself, to wait until Pulsar has received all messages sent by this client:
client.join()
Infinitic does this automatically when the client is closed, ensuring that no messages are lost during shutdown.
In-memory implementation for an easier development experience
Running a local dockerized Pulsar during development is cumbersome: it can easily use 4-5 GB of your machine's RAM. It's now possible to process your workflows without Pulsar, using an in-memory implementation, without changing anything in your code!
To do so, import the new `infinitic-factory` module and use this method to create an Infinitic client:
InfiniticClient client = InfiniticClientFactory.fromConfigFile("infinitic.yml")
By default, a `PulsarInfiniticClient` implementation is provided, but if you add `transport: inMemory` to your `infinitic.yml` file, an `InMemoryInfiniticClient` instance is returned instead. This client also instantiates an `InMemoryWorker` internally and configures it according to your `infinitic.yml` configuration file. From there, when the client dispatches tasks or workflows, they are also processed directly in memory.
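Switching transports is then a one-line change in `infinitic.yml` (a minimal sketch; the rest of your configuration stays the same):

```yaml
# infinitic.yml
transport: inMemory   # remove this line to go back to Pulsar (the default)
```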
This feature is possible because Infinitic was developed from the beginning using an abstracted transport layer.
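To illustrate the idea behind such an abstraction (with hypothetical names, not Infinitic's actual interfaces), a transport layer can be as simple as an interface that client code targets, with interchangeable implementations behind it:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch: a minimal transport abstraction, so the same client
// logic can run against Pulsar in production or in memory during development.
interface Transport {
    void send(String topic, String message);
    void subscribe(String topic, Consumer<String> handler);
}

// In-memory implementation: delivers each message synchronously to all
// handlers subscribed to the topic, with no broker involved.
class InMemoryTransport implements Transport {
    private final Map<String, List<Consumer<String>>> handlers = new HashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        handlers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    public void send(String topic, String message) {
        for (Consumer<String> h : handlers.getOrDefault(topic, List.of())) {
            h.accept(message);
        }
    }
}

class TransportDemo {
    public static void main(String[] args) {
        Transport transport = new InMemoryTransport();
        transport.subscribe("tasks", m -> System.out.println("processed: " + m));
        transport.send("tasks", "hello"); // prints "processed: hello"
    }
}
```

Swapping the implementation behind such an interface is essentially what `transport: inMemory` does, without touching the application code.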
As usual, please reply directly to this email with any feedback; it means a lot to me ❤️.