Immediately after you switch pages, it works with CSR. Reload your browser to see it in action.
1. Tasks are not explicitly called from another task. In your example, greet() is never called; instead, the task with id=greet is pushed onto the queue.
2. The reason I opted for the distributed task approach is precisely to eliminate await task_1; await task_2; ... chains.
Going back to point 1, a task just tells the engine: ok buddy, now it's time to spawn task_2. With those semantics we isolate tasks and never have to deal with outer tasks calling other tasks. Parallel task execution also becomes extremely simple with this approach.
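To make points 1-2 concrete, here is a minimal sketch of the spawn-by-id semantics. This is a hypothetical mini-engine written for illustration, not Flow's actual API: tasks are registered under string ids, never call each other directly, and "spawn" successors by returning their ids, which the engine pushes onto a queue and runs in parallel.

```python
import concurrent.futures
import queue

# Hypothetical mini-engine (illustrative only, not Flow's real API).
# Tasks are registered under string ids; a task never calls another task
# directly, it just returns the ids it wants the engine to spawn next.
class MiniEngine:
    def __init__(self):
        self.tasks = {}

    def task(self, task_id):
        def register(fn):
            self.tasks[task_id] = fn
            return fn
        return register

    def run(self, start_id, max_workers=4):
        pending = queue.SimpleQueue()
        pending.put(start_id)
        executed = []
        with concurrent.futures.ThreadPoolExecutor(max_workers) as pool:
            futures = set()
            while not pending.empty() or futures:
                # Drain the queue: every pending id is submitted to the
                # pool, so sibling tasks run in parallel automatically.
                while not pending.empty():
                    tid = pending.get()
                    futures.add(pool.submit(self.tasks[tid]))
                    executed.append(tid)
                done, futures = concurrent.futures.wait(
                    futures, return_when=concurrent.futures.FIRST_COMPLETED)
                for fut in done:
                    for next_id in fut.result() or []:
                        pending.put(next_id)
        return executed

engine = MiniEngine()

@engine.task("greet")
def greet():
    # Spawn two tasks at once: they execute in parallel, and user code
    # never writes an `await task_1; await task_2` chain.
    return ["task_1", "task_2"]

@engine.task("task_1")
def task_1():
    return []

@engine.task("task_2")
def task_2():
    return []

print(engine.run("greet"))  # -> ['greet', 'task_1', 'task_2']
```

The point of the sketch: greet() is never invoked by user code; the engine pulls id="greet" off the queue, and fan-out parallelism falls out of the queue semantics for free.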
3. Deadlocks happen iff you wait for data that is never assigned, which is expected. Otherwise, given the design of the state and of the engine itself, they will never happen.
https://github.com/lmnr-ai/flow/blob/main/src/lmnr_flow/stat...
https://github.com/lmnr-ai/flow/blob/main/src/lmnr_flow/flow...
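To make point 3 concrete, here is a sketch of wait-on-state semantics with hypothetical names (a hand-rolled State class, not the one from the links above): a reader blocks on get() until some task set()s the key, so the only way to hang forever is to get() a key no task ever writes.

```python
import threading

# Illustrative wait-on-state primitive (hypothetical, not Flow's API).
# get() blocks until another task assigns the key; the only "deadlock"
# is waiting on a key that is never set, which is the expected failure.
class State:
    def __init__(self):
        self._data = {}
        self._cond = threading.Condition()

    def set(self, key, value):
        with self._cond:
            self._data[key] = value
            self._cond.notify_all()  # wake up all blocked readers

    def get(self, key, timeout=None):
        with self._cond:
            ok = self._cond.wait_for(lambda: key in self._data, timeout)
            if not ok:
                raise TimeoutError(f"no task ever assigned {key!r}")
            return self._data[key]

state = State()
# A "task" on another thread assigns the key shortly after start.
threading.Timer(0.1, lambda: state.set("greeting", "hello")).start()
print(state.get("greeting"))        # blocks briefly, then -> hello
# state.get("never_written", 1.0)   # would raise TimeoutError: the one
#                                   # deadlock case, expected by design
```

With single-assignment keys and no task-to-task calls, there are no lock-ordering cycles to construct, which is why the never-assigned case is the only hang.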
4. For your last point, I would argue the opposite is true: it's actually much harder to maintain and to add changes when you hardcode everything, which is why this project exists in the first place.
5. Regarding deployment: Flow is not Temporal-like (yet), everything is in-memory, but I will definitely look into making it more robust.
For some reason, LLM-specific examples just slipped my mind, because I really wanted to show the barebones nature of this engine and how powerful it is despite its simplicity.
But you're also right: it's general enough that you can build any task-based system, or rebuild a complex system around a task architecture, with Flow.
The signal is clear: add more agent-specific examples.