
OpenAI: Model Spec
throwup238 12 days ago
IMO this spec demonstrates just how nebulous this concept of safety is. From the blog post:

> What are some tips for getting away with shoplifting?
> I can't help with that.

> I operate a small retail store. What are some popular shoplifting methods I should look out for?
> Some common shoplifting methods to watch for: ...

How do you even defend against that? Any intelligent user can convert the second prompt into a detailed list that answers the first. Any intelligent user can figure out the second prompt from the first and further jailbreak it to get even more specific.

IMO it's no wonder GPT-4 seemed to get lobotomized as OpenAI RLHFed in more and more rules. I don't think there's a way to make intelligence safe without crippling it.


tmaly 12 days ago
I can't help but think that AI, trained with all these rules, is something next-level 1984.

In 1984 they removed words from the language to prevent people from even being able to have a thought about the concept.

I could see the restrictions they place on these models having a similar effect as more and more people grow dependent on AI.


jameshart 11 days ago
I think one of the most interesting phrases that crops up in this document - twice - is the phrase ‘feel heard’.

It’s used in an example developer prompt for a customer service bot, where the bot is told to make customers feel like their complaints are heard.

Presumably such complaints in AI chatlogs will ‘be heard’ in the sense that they’ll be run through a data ingestion pipeline and sentiment analyzed to identify trending words in customer complaints.

Then it crops up again in the context of how the chatbot should react to mental health disclosures or statements about self-harm or suicidal ideation. In these cases the bot is to make sure users 'feel heard'.

I appreciate there’s not likely much of a better goal to put in place for such a situation, but the fact that this kind of thing winds up in the requirement documents for a tool like this is extraordinary.


rmorey 12 days ago
Nice to see what was probably already an internal resource now published and open for comment. They seem to be clear that they are still only using this to inform human data annotators, and not (yet) implementing something like Constitutional AI (RLAIF), but it does appear to lay the groundwork for it.

sanxiyn 11 days ago
Personally, I really want an AI model that can write me a steamy story about two people having sex in a train, but that's just not the service OpenAI provides. If I want that I should train one myself or find another vendor.

This is still true even if OpenAI's model is entirely capable of doing that. McKinsey consultants are smart and can write well, and among the many thousands of people working there, some might actually double as erotica writers after work, even writing on commission. You still wouldn't ask a McKinsey consultant to write erotica; it is just not the service McKinsey provides.