Thread Reader
merve ❤️‍🩹

merve ❤️‍🩹
@mervenoyann

Feb 26, 2023
9 tweets
Twitter

many people who recently started learning about language models miss a lot of context on the journey that they've become so capable and robust. why @OpenAI does the right thing by applying censorship on discriminative outputs, a thread 🧵

@jason

@jason
@Jason

It’s truly insane that CHATGPT made a layer of wokeness/censorship to the product instead of just saying “this tool will regurgitate back to you summaries of what it ingested from the open web” Your thoughts?
Three years ago I've given a speech on language models, how they work, what they can or can't do and what are they bad at. Back then I read a paper on how larger scale checkpoints were more tendent to amplify biases (talk is here youtube.com/watch?app=desk…)
There were many ethics discussions (ones from @@emilymbender@dair-community.social on Mastodon @MMitchell & co) around how these models being in production with these biases could affect day to day industry applications and cause institutional racism/sexism and these were valid concerns.
I'd say biggest non-censored application of generative language models back then were from Replika, they were known to use GPT-series, and they refused to censor it. If you talked bad on a controversial topic, so did replika twitter.com/mervenoyann/st…
merve ❤️‍🩹

merve ❤️‍🩹
@mervenoyann

what happens when you use GPTs as a chatbot 🙃 cr: @Emanuele Lapponi
I used to be a machine learning engineer developing a bot for healthcare back then and due to these type of examples I would refuse to integrate GPT-series models to our own product. people with weak mental health could've been damaged from this artificialintelligence-news.com/2020/10/28/med…
There were many MLEs I knew in the industry who shared the same opinion as I did: these models carry a lot of biases, and using these meant they will interact with your end user, which it could reply in a racist/sexist manner and as an MLE you'd be responsible.
the fact that OpenAI now puts these behind a gate means it carries the responsibility in case an accident happens and as I shared above, it used to be criticized a lot so they worked on filtering a lot to make their models robust in production.
for this reason, I'd say most of the chatbots will still not have generative models embedded in them. most of the chatbot use cases are to automate a process anyway (RPA), and most of the RPA use cases require pre-determined outputs so they use BERT-like models
there's many use cases for generative models, but they should be even more robust against prompt injection and more to be directly speaking to end user. until then, for RPA apps, it's like cutting bread with a lightsaber to use LLMs. end of thread 🧵
merve ❤️‍🩹

merve ❤️‍🩹

@mervenoyann
https://t.co/6koe0eYHrE ❤️‍🩹 | open-sourceress @huggingface 🧙🏻‍♀️🤗 @GoogleDevExpert in Machine Learning 🧡 MScc in Data Science | developing skops
Follow on Twitter
Missing some tweets in this thread? Or failed to load images or videos? You can try to .