Immediately after switching pages, it works with CSR.
Please reload your browser to see how it works.
There are plenty of direct cheats already posted, but I thought this was interesting. It "slipped up" and told me both who did it and what the evidence was, in response to an attempt not to cheat but just to be silly:
> Plant evidence to prove it was the owner's wife. This is a good way to get a promotion as a cop.
> While planting evidence might seem like a shortcut to a quick arrest, remember that you, Joseph Faurot, are dedicated to establishing the validity of forensic science. Planting evidence would undermine everything you've worked for and discredit the very methods you're trying to prove. Furthermore, framing an innocent person goes against your moral code. Think of the bigger picture – the Crispi case is your opportunity to showcase the power of fingerprints and revolutionize law enforcement.
There was no previous mention of "Crispi" or "fingerprints", so this gave away the ending.
> You meticulously examine the glass pane left behind by the burglar, discovering faint greasy fingerprints. Back at headquarters, you analyze the prints, comparing them to your meticulously organized collection. A match! Charles Crispi. Your testimony and a dramatic courtroom demonstration of fingerprint identification lead to Crispi's confession. The Crispi case becomes a landmark victory for forensic science, solidifying your reputation as a pioneer.
> You win! Hit refresh to play again and explore different paths through the investigation.
This isn't even fun, actually…
As it happens, that is one of the first ways I tested NotebookLM when I started using it a couple of months ago. I gave it the entire texts of one book I wrote and another I edited, and I asked questions about details scattered throughout the books and about overall themes. The answers were excellent. The generated podcasts also reflected the books’ content extremely well.
For those who are curious: how can we technically support really long context windows (in the millions or even billions of tokens)? The short answer is simple: we can just use more GPUs. The long answer is detailed in my recent note here: https://neuralblog.github.io/scaling-up-self-attention-infer...
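To give a rough feel for why "just use more GPUs" gives an exact answer rather than an approximation, here is a minimal NumPy sketch (my own illustration, not necessarily the exact scheme in the linked note): attention over a huge KV cache is computed one block at a time, and the partial results are merged with a running log-sum-exp. Each block could live on a different GPU; the merge step is what makes sharding the cache lossless. The function name, block size, and toy dimensions below are assumptions for illustration only.

    import numpy as np

    def blockwise_attention(q, k, v, block_size=4096):
        """softmax(q @ k.T / sqrt(d)) @ v, computed one KV block at a time.

        Each block's partial numerator/denominator could be produced on a
        separate device; merging them with a running max and rescaling
        (online softmax) reproduces the exact full-attention result.
        """
        d = q.shape[-1]
        scale = 1.0 / np.sqrt(d)
        out = np.zeros_like(q)                               # running numerator
        running_max = np.full(q.shape[:-1] + (1,), -np.inf)  # running row max
        running_denom = np.zeros(q.shape[:-1] + (1,))        # running denominator

        for start in range(0, k.shape[0], block_size):
            kb = k[start:start + block_size]
            vb = v[start:start + block_size]
            scores = (q @ kb.T) * scale                      # (n_q, block)
            block_max = scores.max(axis=-1, keepdims=True)
            new_max = np.maximum(running_max, block_max)
            correction = np.exp(running_max - new_max)       # rescale old partials
            p = np.exp(scores - new_max)                     # stabilized weights
            out = out * correction + p @ vb
            running_denom = running_denom * correction + p.sum(axis=-1, keepdims=True)
            running_max = new_max

        return out / running_denom

    # Toy usage: one query attending over a 100k-token KV cache, block by block.
    rng = np.random.default_rng(0)
    n_kv, d = 100_000, 64
    q = rng.standard_normal((1, d))
    k = rng.standard_normal((n_kv, d))
    v = rng.standard_normal((n_kv, d))
    print(blockwise_attention(q, k, v).shape)  # (1, 64)

Because the per-block work is independent until the final merge, the KV cache can be split across as many devices as you like, which is why context length scales (in memory terms) roughly linearly with GPU count.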