Immediately after switching pages, it works via CSR.
Please reload your browser to see how it works.
How does this compare to the sibling project https://zeminary.com/matrix/app.html?
llama.cpp
- idk why there are so many llama versions to install on yay
- i went with llama.cpp-bin, because it was built with libcurl and the first one i tried apparently was not
- but i had to remove llama.cpp-git-debug from a previous installation
- remember yay -Q | grep ... to check for installed packages
- the cli interface changes between versions; i ended up with --hf-repo ggml-org/qwen2.5... --hf-file qwen25....
- the model's page on huggingface.com probably has the most accurate and up-to-date instructions
- my goal: fast, offline, generalized/automatic autocomplete
- localhost:8080 to access web ui after running llama-server
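The steps above, condensed into a rough shell transcript. The model repo and file names are placeholders, not the exact ones from my notes; copy the real invocation from the model's Hugging Face page:

```shell
# see which llama packages are already installed; remove leftovers
# that conflict with the new install
yay -Q | grep llama
yay -R llama.cpp-git-debug   # only if a previous install left it behind
yay -S llama.cpp-bin         # binary build, linked against libcurl

# start the server; repo/file below are hypothetical examples
llama-server --hf-repo ggml-org/Qwen2.5-Coder-1.5B-Instruct-GGUF \
             --hf-file qwen2.5-coder-1.5b-instruct-q8_0.gguf

# then open http://localhost:8080 for the web UI
```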
They're quick notes, and they actually help me problem-solve rather than disrupt me. I'm casual about it, though; I'm not copying every input and output verbatim. The idea is to leave yourself enough breadcrumbs so that you can reproduce the issue, grab screenshots, and copy error messages later, when you're not in the zone. Hope this helps.

Also, note that I'm most likely to publish a blog post in the following days, while the problem is still fresh in my mind. If I wait months or years, it's pretty much doomed to stay in /drafts forever.
It's a GitHub Action that regularly scrapes your page and checks for new entries, turning it into an RSS feed. Set up a redirect yourblog.com/feed -> you.github.io/feeds/feed.xml, then you're golden.
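The exact redirect mechanism depends on where yourblog.com is hosted. As one hypothetical example, on Netlify it's a single line in a `_redirects` file at the site root:

```
/feed  https://you.github.io/feeds/feed.xml  301
```

Other hosts have equivalents (nginx `return 301`, Cloudflare redirect rules, etc.); the point is just that /feed on your domain lands on the generated feed.xml.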
I encourage anybody who likes geometry and puzzles to give modular origami a try. You just need lots of little pieces of paper and some patience. Then you end up with fun little desk ornaments: https://wonger.dev/assets/origami.jpg
Grieving while being a new mom must be brutal.