Searchformer: Beyond A* – Better planning with transformers via search dynamics
a_wild_dandan 11 days
Ah, I remember reading this paper! Essentially, they created synthetic data by solving search problems with A*. Trained on this data, transformers unsurprisingly learned to solve these problems. They then improved the synthetic data by solving each problem many times with A* and keeping only the run with the fewest search steps (while still producing an optimal plan). Transformers learned to be competitive with this improved search procedure too!
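
If it helps, here's roughly how I picture that filtering step — a minimal sketch, assuming a hypothetical nondeterministic solver `solve_with_trace(problem, rng)` that returns a (plan, search trace) pair; these names are mine, not the paper's:

```python
import random

def pick_training_example(problem, solve_with_trace, n_attempts=32, seed=0):
    """Run a nondeterministic A*-style solver many times on one problem,
    keep only attempts whose plan is optimal (shortest plan observed),
    and among those return the attempt with the fewest search steps."""
    rng = random.Random(seed)
    attempts = [solve_with_trace(problem, rng) for _ in range(n_attempts)]
    best_plan_len = min(len(plan) for plan, _trace in attempts)
    optimal = [(plan, trace) for plan, trace in attempts if len(plan) == best_plan_len]
    plan, trace = min(optimal, key=lambda pair: len(pair[1]))  # shortest search trace
    return {"task": problem, "trace": trace, "plan": plan}
```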

Pretty remarkable stuff. Given the obsession over chat bots, folk often miss how revolutionary transformer sequence modeling is to...well, any sequence application that isn't a chat bot. Looking solely at speeding up scientific simulations by ~10x, it's a watershed moment for humanity. When you include the vast space of other applications, we're in for one wild ride, y'all.


yeldarb 12 days
> While Transformers have enabled tremendous progress in various application settings, such architectures still lag behind traditional symbolic planners for solving complex decision making tasks. In this work, we demonstrate how to train Transformers to solve complex planning tasks and present Searchformer, a Transformer model that optimally solves previously unseen Sokoban puzzles 93.7% of the time, while using up to 26.8% fewer search steps than standard A∗ search. Searchformer is an encoder-decoder Transformer model trained to predict the search dynamics of A∗. This model is then fine-tuned via expert iterations to perform fewer search steps than A∗ search while still generating an optimal plan. In our training method, A∗'s search dynamics are expressed as a token sequence outlining when task states are added and removed into the search tree during symbolic planning. In our ablation studies on maze navigation, we find that Searchformer significantly outperforms baselines that predict the optimal plan directly with a 5-10× smaller model size and a 10× smaller training dataset. We also demonstrate how Searchformer scales to larger and more complex decision making tasks like Sokoban with improved percentage of solved tasks and shortened search dynamics.

Neat; TIL about Sokoban puzzles. I remember playing Chip's Challenge on Windows 3.1 as a kid; it had a lot of levels like that.
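
For a concrete sense of what "search dynamics expressed as a token sequence" could look like, here is a toy sketch on a grid maze — my own illustration, not necessarily the paper's exact token format: each state gets a create token when it enters the frontier and a close token when A* expands it, along with its coordinates, cost-so-far, and heuristic value.

```python
import heapq

def astar_trace(grid, start, goal):
    """Toy A* on a 4-connected grid (1 = wall) that logs its search dynamics
    as a flat token sequence: 'create' when a state enters the frontier,
    'close' when it is expanded, each followed by row, col, cost, heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    tokens = ["create", *map(str, start), "0", str(h(start))]
    frontier, g = [(h(start), 0, start)], {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if cost > g.get(node, float("inf")):
            continue  # stale frontier entry
        tokens += ["close", *map(str, node), str(cost), str(h(node))]
        if node == goal:
            return tokens
        r, c = node
        for nxt in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            nr, nc = nxt
            in_bounds = 0 <= nr < len(grid) and 0 <= nc < len(grid[0])
            if in_bounds and grid[nr][nc] == 0 and cost + 1 < g.get(nxt, float("inf")):
                g[nxt] = cost + 1
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt))
                tokens += ["create", str(nr), str(nc), str(cost + 1), str(h(nxt))]
    return tokens  # goal unreachable; the trace still records what was explored
```

As I read the abstract, the model is trained to map the task description to a trace-plus-plan sequence along these lines, and the expert-iteration fine-tuning then favors shorter traces.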


nextaccountic 11 days
Due to the no free lunch theorem [0], any search algorithm that makes some problems faster will necessarily make other problems slower. What does the worst case look like for an algorithm like this?

I think part of the appeal of A* to me is that I can readily visualize why the algorithm fails on pathological inputs.
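
For plain A*, the pathological cases are easy to construct by hand: put a concave obstacle between start and goal, opening toward the start, and the Manhattan heuristic drags the search deep into the dead end before it backtracks around. A toy sketch of that (my own example, the same kind of grid A* as the sketch above, just counting expansions instead of emitting tokens):

```python
import heapq

def astar_expansions(grid, start, goal):
    """Count how many states A* with a Manhattan heuristic expands before
    reaching the goal on a 4-connected grid (1 = wall)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier, g, expanded = [(h(start), 0, start)], {start: 0}, 0
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if cost > g.get(node, float("inf")):
            continue  # stale frontier entry
        expanded += 1
        if node == goal:
            return expanded, cost
        r, c = node
        for nxt in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0 \
                    and cost + 1 < g.get(nxt, float("inf")):
                g[nxt] = cost + 1
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt))
    return expanded, None

# Concave wall opening toward the start: the heuristic pulls the search into
# the pocket, so A* expands far more cells than the 13-step optimal path touches.
W = 1
pocket = [
    [0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, W, W, W, W, 0, 0, 0],
    [0, 0, 0, 0, 0, W, 0, 0, 0],
    [0, 0, 0, 0, 0, W, 0, 0, 0],
    [0, 0, 0, 0, 0, W, 0, 0, 0],
    [0, 0, W, W, W, W, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0],
]
print(astar_expansions(pocket, start=(3, 0), goal=(3, 7)))
```

What the learned model's worst case looks like is a much harder question; the paper only reports aggregate statistics on mazes and Sokoban, as far as I can tell.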

[0] https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_op...


teleforce 11 days
Previous post and discussions on HN:

Beyond A*: Better Planning with Transformers:

https://news.ycombinator.com/item?id=39479478