How University Students Use Claude
dtnewman 9 days ago
> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.

I built a popular product that helps teachers with this problem.

Yes, it's "hard to answer", but let's be honest: it's a very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT, see what it spits out, change a few words, and submit that".

I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggle was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and ask "hey, what's wrong with this code?" and it'll often spot the bug (never mind that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but it also hurts the learning process.


defgeneric 9 days ago
After reading the whole article, I still came away with the suspicion that this is a PR piece designed to head off strict controls on LLM usage in education. There is a fundamental problem here beyond cheating (which is mentioned, to their credit, albeit little discussed). Some academic topics are only learned through sustained, even painful, sessions where attention has to be fully devoted, where the feeling of being "stuck" has to be endured, and where the brain is given space and time to do the real work of synthesizing, abstracting, and learning, or, in short, thinking.

The prompt chains where students ask "show your work" and "explain" can be interpreted as the kind of back-and-forth you'd hear between a student and a teacher, but they could also just be evidence of higher forms of "cheating". If students are not really working through the exercises at the end of each chapter, but are instead offloading the task to an LLM, then we're going to have a serious competency issue: nobody ever actually learns anything.

Even in self-study, where the solutions are at the back of the text, we've probably all had the temptation to give up and just flip to the answer. Anthropic would be more responsible to admit that the solution manual to every text ever made is now instantly and freely available. This has to fundamentally change pedagogy. No discipline is safe, not even those like music where you might think the end performance is the main thing (imagine a promising, even great, performer who cheats themselves in the education process by offloading any difficult work in their music theory class to an AI, coming away learning essentially nothing).

P.S. There is also the issue of grading on a curve in the current "interim" period where this is all new. Assume a lazy professor, or one refusing to adopt any new kind of teaching/grading method: the "honest" students have no incentive to do it the hard way when half the class is going to cheat.


SamBam 9 days ago
I feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them.

In the article, I guess this would be buried in:

> Students also frequently used Claude to provide technical explanations or solutions for academic assignments (33.5%)—working with AI to debug and fix errors in coding assignments, implement programming algorithms and data structures, and explain or solve mathematical problems.

"Write my essay" would count as a "solution for an academic assignment", but by referring to it only obliquely in that paragraph they don't really tell us its prevalence.

(I also wonder if students are smart and keep outright usage of LLMs to complete assignments on a separate, non-university account, not trusting that Anthropic would keep their conversations private from the university if asked.)


walleeee 9 days ago
> Students primarily use AI systems for creating (using information to learn something new)

this is a smooth way to not say "cheat" in the first paragraph and to reframe creativity in a way that reflects positively on llm use. in fairness they then say

> This raises questions about ensuring students don’t offload critical cognitive tasks to AI systems.

and later they report

> nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement. Whereas many of these serve legitimate learning purposes (like asking conceptual questions or generating study guides), we did find concerning Direct conversation examples including:
>
> - Provide answers to machine learning multiple-choice questions
> - Provide direct answers to English language test questions
> - Rewrite marketing and business texts to avoid plagiarism detection

kudos for addressing this head on. the problem here, and the reason these are not likely to be democratizing but rather wedge technologies, is not that they make grading harder or violate principles of higher education but that they can disable people who might otherwise learn something


zebomon 9 days ago
The writing is irrelevant. Who cares if students don't learn how to do it? Or if the magazines are all mostly generated a decade from now? All of that labor spent on writing wasn't really making economic sense.

The problem with that take is this: it was never about the act of writing. What we lose, if we cut humans out of the equation, is writing as a proxy for what actually matters, which is thinking.

You'll soon notice the downsides of not-thinking (at scale!) once you have a generation of students who were never taught to exercise their thinking by writing.

I hope that more people come around to this way of seeing things. It seems like a problem that will be much easier to mitigate than to fix after the fact.

A little self-promo: I'm building a tool to help students and writers create proof that they wrote something the good ol' fashioned way. Check it out at https://itypedmypaper.com and let me know what you think!