> I may or may not have used it for some of my CS classes. >_>
> But didn't over-rely on it by any means.

Yeah, it's extremely helpful in CS in particular. Did you ever get to try GitHub Copilot?
> Man, I wish I had this when I was in college. It would've made those BS writing classes so much more bearable

I'm so lucky to be in a time period where it's useful, but professors still have no idea what it is or how to stop it.
> weaklings

You could probably use it for work emails or other monotonous writing tasks. My sister had an admin-type job, and it whips shit up in seconds.
> You think there will be a crackdown in the future?

Yes, it's too powerful a tool to allow in academia; it will make cheating too easy.
> Hell, it makes my work life so much better, too. Instead of spending hours combing through documentation for an answer to something, an AI just does it for me.

That's awesome! Do your coworkers use AI too? I really wonder how widespread it currently is.
> It's a tool.

Yep. Like any tool, when used correctly it can be extremely helpful. When used incorrectly, you can hurt yourself.
> You think there will be a crackdown in the future?

Yes. Using it as a study tool is fine.

> I may or may not have used it for some of my CS classes. >_>

> Man, I wish I had this when I was in college. It would've made those BS writing classes so much more bearable

But having it do your work for you is cheating.
> So universities are gonna be pumping out folks who can't even write a couple paragraphs eh
> We're doomed

Garbage in, garbage out. Universities weren't exactly pumping out many competent writers before this, and all that writing online is what GPT learns from.
> So universities are gonna be pumping out folks who can't even write a couple paragraphs eh
> We're doomed

Teachers were saying we were doomed when Wikipedia was a thing cuz apparently we wouldn't be able to find sources on our own lol
> Teachers were saying we were doomed when Wikipedia was a thing cuz apparently we wouldn't be able to find sources on our own lol

And now we have more misinformation than ever before gaining traction among the masses. They weren't wrong.

> And now we have more misinformation than ever before gaining traction among the masses

Nah, that's just idiots on social media.
> Garbage in, garbage out. Universities weren't exactly pumping out many competent writers before this, and all that writing online is what GPT learns from.

That's not what garbage in, garbage out means.

> That's not what garbage in, garbage out means.

Uh, yes it does. GPT learns from mediocre writing, therefore its output is mediocre. In other words, the quality or accuracy of your output can only be as good as the data you're using.
> Uh, yes it does. GPT learns from mediocre writing, therefore its output is mediocre. In other words, the quality or accuracy of your output can only be as good as the data you're using.

Well, it's getting much better, and fast. You've likely used GPT-3.5 for the writing prompts. GPT-4 is an entirely different beast... it's good. It's actually really good. And it will only get better.
> Hopefully the universities implement AI that detects AI. Otherwise universities will become more pointless endeavors than they already tend to be.

There are tools that sort of combat this problem, like this: https://gptzero.me/
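For what it's worth, detectors like GPTZero have said they score text on statistics such as perplexity (how predictable the text is to a language model) and "burstiness" (how much sentence-to-sentence variation there is). Here's a toy sketch of just the burstiness half, using only the Python standard library; the sentence splitting and the idea of comparing raw standard deviations are my own simplifications, not GPTZero's actual method:

```python
# Toy "burstiness" signal: human prose tends to vary sentence length
# more than LLM prose does. This is one weak heuristic, not a detector.
import re
import statistics

def burstiness(text: str) -> float:
    """Population std-dev of sentence lengths, measured in words."""
    lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

uniform = "I like cats a lot. I like dogs a lot. I like fish a lot."
varied = "Wow. That essay rambled on for what felt like an entire afternoon."
print(burstiness(uniform))  # 0.0 -- every sentence is 5 words
print(burstiness(varied))   # 5.0 -- lengths 1 and 11 vary widely
```

Real detectors combine several signals like this with a language model's own probability estimates, which is why they can be fooled in both directions.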
> It's going to screw over books and newspapers too... humanity is rapidly going to be reduced to only a few sections and it's going to suck for everyone but those on the top.

This is true, but this was ALWAYS the direction capitalist society was heading.
> GPT-4 is an entirely different beast... it's good. It's actually really good. And it will only get better.

Uh, yes, GPT-4's training data contains good works, but it also contains a lot of bad ones. When they're pulling from such a wide pool, they're necessarily capturing a minority of really good things and a large majority of mediocre or bad things. That's just how human production is. And the primary way OpenAI has advanced GPT iterations has been by simply expanding the data set.

When it comes to prompts that spit out a school essay in particular, most of the data for GPT to reference will be meh. Because GPT doesn't think, it can't look at a good reference work and a bad reference work and judge them apart. It can only take the prompt and create the output that most closely matches it based on its training data.

I have ChatGPT-4 on my phone right now. I bought a month's subscription to try it at my job writing some JavaScript, and I found that up to maybe 40% of its lines would contain errors on a first pass, as one example.

I actually think what text transformers are probably best at is, unfortunately, that AI girlfriend stuff lol. This is where GPT blows our human minds, because you can basically text it and it sounds like a person could be texting right back at you. By expanding context memory in new GPT iterations, it has gotten really good at remembering traits, facts, and history, and thus keeping up the appearance of a real conversation partner.

But for critical writing, GPT-4 still favors summary, and obviously it can't really make new observations about a work. I tried it in a class I took on 70s film to compare against my own writing, and personally, if I had been a grading assistant, I would've given the outputs I saw a C+ or B- at best. Yes, they can be coherent, as if a person wrote them, which is certainly an achievement on its own. But in my opinion it is not a beast. Granted, my standards aren't those of some student trying to scrape by in a required class.

> Uh, yes, GPT-4's training data contains good works, but it also contains a lot of bad ones.

This makes a lot of sense. I'm currently in pretty entry-level CS classes, so that's probably why I haven't seen many issues with it. At least not enough to hit the error rate you're experiencing at your level of programming.
> Haven't tried using Copilot at all actually. It's similar I'm guessing?

Copilot is GPT-4 integrated straight into your IDE. It will code alongside you, offer suggestions, and optimize whatever you're writing. I've avoided trying it out, though, since it doesn't encourage critical thinking and problem solving, which you kinda need to actually learn.
And yeah for my coding assignments, I've generally done probably 90% of it on my own, then I'll end up getting stumped and will ask ChatGPT for help. Usually it ends up being that I was on the right track but I screwed up the execution somewhere along the way.
Yeah I never understood how some people would just use it to code their entire assignment and I'm like, dude. >_>
What will students do when they don't use AI but get accused of it anyway? Take it as a compliment?
> So long as you don't use it to write papers or anything, cool beans?

But most schools are using tools to check for AI usage above and beyond just running something through TurnItIn now.
The AI, once it's good enough, won't do that.
> One thing that would be really useful would be feeding it company documents to create a sort of easy-to-access reference library.

Now OpenAI has the data from your company documents.
> "AI, what reports or internal research do we have relating to concrete properties and repair techniques after fire exposure? Who within the company is experienced in these types of projects?"

Do you think that would be more useful than an internal company wiki or database?
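In practice, that "company reference library" idea is usually built as retrieval-augmented generation: search your own documents first, then paste only the top matches into the model's prompt, so the documents never have to become training data. Here's a toy sketch of just the retrieval step using bag-of-words cosine similarity from the standard library; real systems use embedding models, and the filenames and snippets below are made up:

```python
# Toy retrieval step for a "chat with your company documents" setup.
# Rank documents by cosine similarity between word-count vectors.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Return the names of the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda name: cosine(qv, Counter(docs[name].lower().split())),
        reverse=True,
    )
    return ranked[:k]

docs = {  # hypothetical internal reports
    "fire_repair.pdf": "concrete repair techniques after fire exposure damage",
    "bridge_deck.pdf": "bridge deck overlay and chloride ingress",
    "hr_policy.pdf": "vacation policy and timesheet rules",
}
print(retrieve("concrete properties after fire exposure", docs, k=1))
# -> ['fire_repair.pdf']
```

Only the retrieved snippets would then be sent to the model, which also narrows the surface area for hallucinated answers.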
> I'm surprised so many people find ChatGPT useful. So-called "hallucinations" are rampant. Like, I see people saying they are having ChatGPT teach them new things. If you don't already know about the thing, you won't spot the errors and will assimilate false information. Summarizing long documents is also problematic for the same reason. If you haven't read the document, how many of the mistakes will you catch? Helping you rewrite a paragraph or two makes sense, or maybe help with form letters, but having it write papers for you is a terrible idea from a learning and personal growth standpoint.

This. As far as I can tell, the AI chatbots are the most efficient way known to provide high volumes of plausible bullshit.
> Do you think that would be more useful than an internal company wiki or database?

Probably not. The way we do it now is a combination of an internal search engine, technical forums, and full-time librarians who provide service to all branches (~30 nationwide). It works pretty well imo. All of these are subdivided by technical subgroup, which also includes architecture-related topics, so there are some ways to filter what you're looking for. There is also a repository of hundreds of technical webinars sorted by topic.
> This. As far as I can tell, the AI chatbots are the most efficient way known to provide high volumes of plausible bullshit.

Hallucinations do happen, but I wouldn't necessarily call them rampant. At least not in the more modern GPTs. Also, understanding why hallucinations happen in the first place can help you use AI in a way that minimizes them.
It will be remarkable to see what fraction of the economy is driven by efficient production of high volumes of plausible bullshit.
But I can't see chatbots being useful for anything that requires understanding.