That an LLM is part of a system that includes a large amount of ML is not surprising; it's a great human interface. Do I for a second believe it played a much larger role, such that it could be implicated in any non-negligible way for missing the attack? Of course not.
My point here is that ML continues to play a role, ML continues to both succeed and fail, and ML will continue to be imperfect, even more so as it competes against adversarial ML. Blaming imperfect tools for inevitable failures is not a useful exercise, and certainly not a "problem" considering that the alternative is even more failure-prone humans.
“Well aware of this Hamas members fed their enemy the data that they wanted to hear. The AI system, it turned out, knew everything about the terrorist except what he was thinking.”
When your opponent can see everything you do and hear everything you say, the only defence is privacy. In the novel The Three Body Problem, this is taken to an extreme: the only privacy is inside the human mind, so select individuals are allowed to make decisions based on strategies known only to them, strategies they have never said aloud. Science fiction has become reality.
Ukraine has these use cases, and also high motivation to tackle them. Ukrainians are controlling the battlefield with commodity computers https://en.defence-ua.com/news/how_the_kropyva_combat_contro... They have sunk multiple Russian warships with long-range naval drones https://www.bbc.com/news/world-europe-68528761 They recently started large-scale testing of cheap flying drones with on-board computer-vision target recognition https://www.forbes.com/sites/davidhambling/2024/03/21/ukrain...
However, the US is at peace. That is a great thing in itself, but it means it is too easy for them to waste billions of dollars developing technologies that look awesome in PowerPoint but are useless in practice.