
The slow collapse of critical thinking in OSINT due to AI

Aurornis 13 hours ago
> Participants weren’t lazy. They were experienced professionals.

Assuming these professionals were great critical thinkers until the AI came along and changed that is a big stretch.

In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources. LLMs just came along and offered them opinions on demand that they could confidently repeat.

> The scary part is that many users still believed they were thinking critically, because GenAI made them feel smart

I don’t see much difference between this and someone who devours TikTok videos on a subject until they feel like an expert. Same pattern, different sources. The people who outsource their thinking and collect opinions they want to hear just have an easier way to skip straight to the conclusions they want now.


Animats 7 hours ago
The big problem in open source intelligence is not in-depth analysis. It's finding something worth looking at in a flood of info.

Here's the CIA's perspective on this subject.[1] The US intelligence community has a generative AI system to help analyze open source intelligence. It's called OSIRIS.[2] There are some other articles about it. The previous head of the CIA said the main use so far is summarization.

The original OSINT operation in the US was the Foreign Broadcast Monitoring Service from WWII. All through the Cold War, someone had to listen to Radio Albania just in case somebody said something important. The CIA ran that for decades. Its descendant is the current open source intelligence organization. Before the World Wide Web, they used to publish some of the summaries on paper, but as people got more serious about copyright, that stopped.

DoD used to publish The Early Bird, a daily newsletter for people in DoD. It was just reprints of articles from newspapers, chosen for stories senior leaders in DoD would need to know about. It wasn't supposed to be distributed outside DoD for copyright reasons, but it wasn't hard to get.

[1] https://www.cia.gov/resources/csi/static/d6fd3fa9ce19f1abf2b...

[2] https://apnews.com/article/us-intelligence-services-ai-model...


jruohonen 20 hours ago
"""

• Instead of forming hypotheses, users asked the AI for ideas.

• Instead of validating sources, they assumed the AI had already done so.

• Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.

This isn’t hypothetical. This is happening now, in real-world workflows.

"""

Amen, and OSINT is hardly unique in this respect.

And implicitly related, philosophically:

https://news.ycombinator.com/item?id=43561654


palmotea 16 hours ago
One way to achieve superhuman intelligence in AI is to make humans dumber.

0hijinks 11 hours ago
It sure seems like GenAI in these scenarios is a detriment rather than a useful tool if, in the end, the operator must interrogate it down to a fine enough level of detail to be satisfied. In the author's Scenario 1:

> You upload a protest photo into a tool like Gemini and ask, “Where was this taken?”

> It spits out a convincing response: “Paris, near Place de la République.” ...

> But a trained eye would notice the signage is Belgian. The license plates are off.

> The architecture doesn’t match. You trusted the AI and missed the location by a country.

Okay. So let's say we proceed with the recommendation in the article and interrogate the GenAI tool. "You said the photo was taken in Paris near Place de la République. What clues did you use to decide this?" Say the AI replies, "The signage in the photo appears to be in French. The license plates are of European origin, and the surrounding architecture matches images captured around Place de la République."

How do I know any better? Well, I should probably crosscheck the signage with translation tools. Ah, it's French, but some words are Dutch. Okay, so it's probably not Paris at all. Let's look into the license plate patterns...
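The crosscheck described above can be sketched in code. This is a hypothetical, deliberately crude illustration (hand-picked stopword lists, not a real language detector or OCR pipeline): given words read off signage in the photo, flag whether the language mix is consistent with the claimed location.

```python
# Hypothetical sketch of the signage crosscheck. The word lists and the
# "Paris implies French-only signage" rule are illustrative assumptions,
# not a real geolocation method.
FRENCH_HINTS = {"rue", "place", "sortie", "interdit", "la"}
DUTCH_HINTS = {"straat", "plein", "uitgang", "verboden", "het"}

def signage_languages(ocr_words):
    """Return the set of languages hinted at by a list of OCR'd sign words."""
    words = {w.lower() for w in ocr_words}
    langs = set()
    if words & FRENCH_HINTS:
        langs.add("fr")
    if words & DUTCH_HINTS:
        langs.add("nl")
    return langs

def consistent_with_paris(ocr_words):
    """Paris signage should read as French only; Dutch hints point elsewhere
    (e.g. bilingual Brussels street signs)."""
    return signage_languages(ocr_words) == {"fr"}

# French-only signage passes; mixed French/Dutch signage fails the check.
print(consistent_with_paris(["Rue", "de", "la", "Loi"]))        # True
print(consistent_with_paris(["Rue", "Wetstraat", "verboden"]))  # False
```

Even a toy check like this makes the point: the verification step is mechanical once you distrust the model's answer, which is exactly why doing it yourself can be faster than interrogating the AI.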

At what point is it just better to do the whole thing yourself? Happy to be proven wrong here, but this same issue comes up time and time again with GenAI involved in discovery/research tasks.

EDIT: Maybe walk through the manual crosschecks hand-in-hand? "I see some of the signage is in Dutch, such as the road marking in the center left of the image. Are you sure this image is near Place de la République?" I have yet to see this play out in an interactive session. Maybe there's a recorded one out there...