
12-factor Agents: Patterns of reliable LLM applications
pancsta 3 days
Very informative wiki, thank you; I will definitely use it. I've made my own "AI Agents framework" [0] based on the actor model, state machines, and aspect-oriented programming (released just yesterday, no HN post yet), and I really like points 5 and 8:

    5. Unify execution state and business state
    8. Own your control flow
That is exactly what SecAI does: at its core it's a graph control-flow library (a multigraph instead of a DAG), with LLM calls embedded in the graph's nodes. The flow is reinforced with negotiation, cancellation, and stateful relations, which make it more "organic". Another thing often missed by other frameworks is dedicated devtools (dbg, repl, svg): programming for failure, inspecting every step in detail, automatic data exporters (metrics, traces, logs, sql), and dead-simple integrations (bash). I've released the first tech demo [1], which showcases all the devtools using a reference implementation of deepresearch (ported from AtomicAgents). You may especially like the Send/Stop button, which is nothing other than "Factor 6: Launch/Pause/Resume with simple APIs". Oh, and it's network-transparent, so it can scale.
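To make the "LLM calls embedded in graph nodes" idea concrete, here is a minimal sketch in plain Python. This is not SecAI's actual API; all names are made up, the "LLM" is a canned stub, and the graph walker is the simplest possible loop where each node chooses its own successor (so cycles like draft/review are allowed, unlike in a DAG).

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned answer."""
    return "APPROVED" if "review" in prompt else "draft text"

class Node:
    def __init__(self, name, action):
        self.name = name
        self.action = action  # callable: state -> name of next node, or None

def run(nodes, start, state):
    """Walk the graph until a node returns no successor."""
    current = start
    while current is not None:
        current = nodes[current].action(state)
    return state

def draft(state):
    state["text"] = fake_llm("write a draft")
    return "review"

def review(state):
    state["verdict"] = fake_llm("review this: " + state["text"])
    # Loop back to drafting if the review fails: a cycle a DAG can't express.
    return None if state["verdict"] == "APPROVED" else "draft"

nodes = {"draft": Node("draft", draft), "review": Node("review", review)}
result = run(nodes, "draft", {})
```

The point is only that control flow lives in the graph, not inside the model's context window.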

Feel free to reach out.

[0] https://github.com/pancsta/secai

[1] https://youtu.be/0VJzO1S-gV0


mgdev 3 days
These are great. I had my own list of takeaways [0] after doing this for a couple years, though I wouldn't go so far as calling mine factors.

Like you, the biggest one I didn't include but would now is owning the lowest-level planning loop. It's fine to have some dynamic planning, but you should own an OODA loop (observe, orient, decide, act) and have heuristics for determining whether you're converging on a solution (e.g. scoring) or should break out (e.g. a max loop count).
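A hedged sketch of what owning that loop might look like: an explicit loop with a convergence score and a hard iteration budget. The function names and thresholds are illustrative, not from any particular framework.

```python
def solve(observe, act, score, max_loops=10, threshold=0.9):
    """Run an OODA-style loop until the solution scores well enough
    or the loop budget is exhausted."""
    state = observe()                    # observe: initial view of the problem
    for _ in range(max_loops):
        if score(state) >= threshold:    # converged -> stop
            return state, True
        state = act(state)               # decide + act: one planning step
    return state, False                  # break out: budget exhausted

# Toy usage: the "plan" is a counter whose score improves each step.
state, ok = solve(
    observe=lambda: 0,
    act=lambda s: s + 1,
    score=lambda s: s / 5,               # crosses 0.9 once s reaches 5
)
```

Both exits are explicit, so a caller can distinguish "solved" from "gave up" instead of trusting the model to terminate.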

I would also potentially bake in a workflow engine. Then, have your model build a workflow specification that runs on that engine (where workflow steps may call back to the model) instead of trying to keep an implicit workflow valid/progressing through multiple turns in the model.
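The workflow-engine suggestion can be sketched in a few lines: the model emits a declarative step list once, and a dumb engine executes it, calling back to the model only where a step asks for it. Everything here (the registry, the spec, the stubbed model) is invented for illustration.

```python
def fake_model(task: str) -> str:
    return task.upper()              # stand-in for an LLM call

STEPS = {                            # step registry the spec can reference
    "fetch":   lambda ctx: ctx.update(data="raw input"),
    "ask_llm": lambda ctx: ctx.update(summary=fake_model(ctx["data"])),
    "publish": lambda ctx: ctx.update(done=True),
}

# A spec the model might have produced (e.g. as JSON) in a single turn:
spec = ["fetch", "ask_llm", "publish"]

def run_workflow(spec, ctx=None):
    """Execute the spec step by step; the spec, not the chat history,
    carries the workflow state."""
    ctx = ctx or {}
    for step in spec:
        STEPS[step](ctx)
    return ctx

ctx = run_workflow(spec)
```

The win is that validity and progress live in an explicit spec the engine can check, instead of being implicit across multiple model turns.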

[0]: https://mg.dev/lessons-learned-building-ai-agents/


hhimanshu 3 days
I am wondering how libraries like DSPy [0] fit into your factor-2 [1].

As I was reading, I saw a mention of BAML: "the above example uses BAML to generate the prompt ..."

In my experience, hand-writing prompts for extracting structured information from unstructured data has never been easy. With DSPy, my experience so far has been quite good.

As you have used the raw prompts from BAML, what do you think of using the raw prompts from DSPy [2]?

[0] https://dspy.ai/

[1] https://github.com/humanlayer/12-factor-agents/blob/main/con...

[2] https://dspy.ai/tutorials/observability/#using-inspect_histo...
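For readers unfamiliar with the contrast being drawn, here is a toy sketch (not DSPy's actual API) of the declarative-signature idea: you state the input/output fields once, and the prompt text is generated for you instead of being hand-written. The `build_prompt` helper and the field-spec format are invented for illustration.

```python
def build_prompt(signature: dict, text: str) -> str:
    """Render a prompt from a field specification, roughly the way
    signature-based libraries compile one under the hood."""
    fields = ", ".join(f"{k} ({v})" for k, v in signature["outputs"].items())
    return (
        f"Task: {signature['task']}\n"
        f"Extract the following fields: {fields}\n"
        f"Input: {text}\n"
    )

sig = {
    "task": "extract contact info",
    "outputs": {"name": "str", "email": "str"},
}
prompt = build_prompt(sig, "Reach Jane at jane@example.com")
```

The question above is essentially whether the prompt such a library compiles is one you'd be happy to own.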


daxfohl 3 days
This old obscure blog post about framework patterns has resonated with me throughout my career and I think it applies here too. LLMs are best used as "libraries" rather than "frameworks", for all the reasons described in the article and more, especially now while everything is in such flux. "Frameworks" are sexier and easier to sell though, and lead to lock-in and add-on services, so that's what gets promoted.

https://tomasp.net/blog/2015/library-frameworks/


daxfohl 3 days
Another one: plan for cost at scale.

These things aren't cheap at scale, so whenever something might be handled by a deterministic component, try that first. Not only do you save on hallucinations and latency, but it can make a huge difference to your bottom line.
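The "deterministic first" routing can be a one-liner. A minimal sketch, with the expensive model call stubbed out: try a cheap rule-based handler, and only fall back to the model when it can't answer.

```python
import re

def deterministic(query: str):
    """Handle the cases a regex can settle; return None otherwise."""
    m = re.fullmatch(r"what is (\d+) \+ (\d+)\??", query.lower())
    if m:
        return str(int(m.group(1)) + int(m.group(2)))
    return None

def expensive_llm(query: str) -> str:
    return "llm answer"              # stand-in for a real model call

def handle(query: str) -> str:
    # Falls through to the model only when the cheap path declines.
    return deterministic(query) or expensive_llm(query)

print(handle("What is 2 + 3?"))      # answered with zero model calls
```

Every query the cheap path absorbs is a call that can't hallucinate, adds no latency, and costs nothing.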