The Pentagon's Silicon Valley Problem
dosinga 32 days ago
The examples in the article are rather cherry-picked. Failures in Vietnam can hardly be blamed on an IBM 360 alone. The Hamas attack might have surprised Israel, but the Iron Dome has been an example of tech working well in recent years. The US warned anybody who wanted to listen (not many) that Russia was about to attack Ukraine. And it was a bunch of rather theoretical physicists who built the atomic bomb.

nkozyra 32 days ago
I think - like a lot of media reporting on the space - this overgeneralizes (heh) artificial intelligence. The predictive aspects of ML have been in use in modern militaries for _decades_, and the opening graf hand-wavingly indicates that an LLM was a bigger chunk of the perceived intelligence failure of the October 7 attack.

That an LLM is part of a system that includes a large amount of ML is not surprising. It's a great human interface. Do I for a second believe that it played a much larger role, such that it was responsible in any non-negligible way for missing the attack? Of course not.

My point here is that ML continues to play a role, ML continues to both succeed and fail, and ML will continue to be imperfect, even more so as it competes against adversarial ML. Blaming imperfect tools for inevitable failures is not a useful exercise, and it is certainly not a "problem" considering that the alternative is even more failure-prone humans.


gorgoiler 32 days ago
“The AI system knows everything about Hamas: what they said, what they published […] it analyzes behavior, predicts risks, and raises alerts.”

“Well aware of this, Hamas members fed their enemy the data that they wanted to hear. The AI system, it turned out, knew everything about the terrorist except what he was thinking.”

When your opponent can see everything you do and hear everything you say, the only defence is privacy. In the novel The Three-Body Problem, this is taken to an extreme: the only privacy is inside the human mind, and so select individuals are allowed to make decisions based on strategies known only to them, which they have never said aloud. Science fiction has become reality.


Const-me 32 days ago
I think the most important lesson is that it’s borderline impossible to design any good system without clear use cases.

Ukraine has these use cases, and high motivation to tackle them. Ukrainians are controlling the battlefield with commodity computers: https://en.defence-ua.com/news/how_the_kropyva_combat_contro... They have sunk multiple Russian warships with long-range naval drones: https://www.bbc.com/news/world-europe-68528761 They recently started large-scale testing of cheap flying drones with computer-vision-based target recognition on board: https://www.forbes.com/sites/davidhambling/2024/03/21/ukrain...

However, the US is at peace. That is a great thing in itself, but it means it’s too easy to waste billions of dollars developing technologies that look awesome in PowerPoint but are useless in practice.