Using ChatGPT in college has made it so much easier

I can just dump a lecture into GPT and it will generate all the notes I need, make flashcards, create quizzes to test my learning, and re-explain topics I'm having trouble understanding

And my easy pre-req type classes? Even easier. It helped me finish an entire semester of Environmental Science in the first week

This feels like a golden age
in a constant state of confusion
Post #2 was unavailable or deleted.
Man, I wish I had this when I was in college. It would've made those BS writing classes so much more bearable
https://imgur.com/aMaI3hj https://imgur.com/7PsdJNc
https://imgur.com/eK8vZVn https://imgur.com/u2HR4nG https://imgur.com/nQGM5cZ
weaklings
---
Post #5 was unavailable or deleted.
ImAMarvel posted...
I may or may not have used it for some of my CS classes. >_>

But didn't over rely on it by any means.
Yeah it is extremely helpful in CS in particular. Did you ever get to try GitHub Copilot?

I don't use that software, but if I'm struggling with how to perform a certain action, ChatGPT will help me figure it out. Or if my code feels way too complicated for what it's trying to do, I'll ask GPT if there are ways of optimizing it
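
Like, here's an invented example of the kind of cleanup I mean (not actual class code, just the shape of it): I write the clunky version first, and the simpler rewrite below is the kind of thing GPT tends to suggest.

# Invented example, not real assignment code.
# Clunky version I'd write on the first try:
def even_squares_clunky(numbers):
    result = []
    for n in numbers:
        if n % 2 == 0:
            result.append(n * n)
    return result

# The kind of simpler rewrite GPT tends to suggest:
def even_squares(numbers):
    return [n * n for n in numbers if n % 2 == 0]

print(even_squares_clunky([1, 2, 3, 4]))  # [4, 16]
print(even_squares([1, 2, 3, 4]))         # [4, 16]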

I can see how it'd be extremely easy to cheat. I try to be mindful, though, of using it more as a learning tool

NeonBoobs posted...
Man, I wish I had this when I was in college. It would've made those BS writing classes so much more bearable
I am so lucky to be in the time period where it's useful, but professors still have no idea what it is or how to stop it

You think there will be a crackdown in the future?

Robot2600 posted...
weaklings
You could probably use it for work emails or other monotonous writing tasks. My sister had an admin-type job, and it whipped shit up in seconds
in a constant state of confusion
It's a tool.

Like any tool, when used correctly, it can be extremely helpful. When used incorrectly, you can hurt yourself.
My resolution - the next time the Eagles are in the Superbowl, I'm going!
February 10th, 2023
can chatgpt help me rizz them baddies in college
Harpie posted...
You think there will be a crackdown in the future?
Yes, it's too powerful of a tool to use in academia; it will make cheating too easy.

Basically, a full ban on tech in school will happen.

Back to paper/pencil with students looking up at the teacher and taking notes.
Are you a MexiCAN or a MexiCAN'T - Johnny Depp ' Once Upon A Time in Mexico '
It's also useful for DnD. I can text-dump all of the world lore into it and query it whenever I need to answer a question, without looking through my mess of notes or a bunch of different books to find a thing.
Furthermore, The GOP is a Fascist Organization and must be destroyed
Asherlee10 posted...
Hell, it makes my work life so much better, too. Instead of spending hours combing through documentation for an answer to something, an AI just does it for me.
That's awesome! Do your coworkers use AI too? I really wonder how widespread it currently is

FlyEaglesFly24 posted...
It's a tool.

Like any tool, when used correctly, it can be extremely helpful. When used incorrectly, you can hurt yourself.
Yep.
The biggest flaw it has comes from being a large language model. Since it's an LLM, it has no concept of abstract thinking or logic. GPT cannot do math problems or help with my calculus classes for this reason.
It's why you'll also see AI create images of people with 7 fingers, a foot melting into the ground, and other bizarre interpretations.

I'm extremely curious how things like this will be solved in the future
in a constant state of confusion
Harpie posted...
You think there will be a crackdown in the future?
Yes. Using it as a study tool is fine. But...

ImAMarvel posted...
I may or may not have used it for some of my CS classes. >_>

NeonBoobs posted...
It would've made those BS writing classes so much more bearable
But having it do your work for you is cheating.
River Song: Well, I was off to this gay gypsy bar mitzvah for the disabled when I thought 'Gosh, the Third Reich's a bit rubbish, I think i'll kill the Fuhrer'
NeonBoobs posted...
Man, I wish I had this when I was in college. It would've made those BS writing classes so much more bearable

So universities are gonna be pumping out folks who can't even write a couple paragraphs eh

We're doomed
This post didn't exist to you until you read it. You willed it into existence in your psyche by choosing to observe it. Thats the power you have. Use it well.
My last semester in college was the first time it was on professors' radars. I tried to use it to help figure out how certain variables worked in my supply chain class, but it was useless, just making up numbers. And I can write an essay 50x better than GPT, probably because most analytical writing on the internet is so superficial.
https://imgur.com/gallery/dXDmJHw
https://www.youtube.com/watch?v=75GL-BYZFfY
Toonstrack posted...
So universities are gonna be pumping out folks who can't even write a couple paragraphs eh

We're doomed
Garbage in, garbage out. Universities weren't exactly pumping out many competent writers before this, and all that writing online is what GPT learns from.
https://imgur.com/gallery/dXDmJHw
https://www.youtube.com/watch?v=75GL-BYZFfY
Toonstrack posted...
So universities are gonna be pumping out folks who can't even write a couple paragraphs eh

We're doomed
Teachers were saying we were doomed when Wikipedia was a thing cuz apparently we wouldn't be able to find sources on our own lol
https://imgur.com/aMaI3hj https://imgur.com/7PsdJNc
https://imgur.com/eK8vZVn https://imgur.com/u2HR4nG https://imgur.com/nQGM5cZ
NeonBoobs posted...
Teachers were saying we were doomed when Wikipedia was a thing cuz apparently we wouldn't be able to find sources on our own lol

And now we have more poorly sourced misinformation than ever before gaining traction among the masses

They weren't wrong
This post didn't exist to you until you read it. You willed it into existence in your psyche by choosing to observe it. Thats the power you have. Use it well.
Toonstrack posted...
And now we have more poorly sourced misinformation than ever before gaining traction among the masses

They weren't wrong
Nah, that's just idiots on social media
https://imgur.com/aMaI3hj https://imgur.com/7PsdJNc
https://imgur.com/eK8vZVn https://imgur.com/u2HR4nG https://imgur.com/nQGM5cZ
Doe posted...
Garbage in, garbage out. Universities weren't exactly pumping out many competent writers before this,

That's not what garbage in, garbage out means.

and all that writing online is what GPT learns from.

That's what worries me.

This post didn't exist to you until you read it. You willed it into existence in your psyche by choosing to observe it. Thats the power you have. Use it well.
NeonBoobs posted...
Nah, that's just idiots on social media

Yea. Most people.
This post didn't exist to you until you read it. You willed it into existence in your psyche by choosing to observe it. Thats the power you have. Use it well.
Toonstrack posted...
That's not what garbage in, garbage out means.
Uh, yes it does. GPT learns from mediocre writing, therefore its output is mediocre. In other words, the quality or accuracy of your output can only be as good as the data you're using.
https://imgur.com/gallery/dXDmJHw
https://www.youtube.com/watch?v=75GL-BYZFfY
Back in my day we didn't have fancy AI to write our papers. We had to type them by hand like honest Americans!
Many Bothans died to bring you this post.
Doe posted...
Uh, yes it does. GPT learns from mediocre writing

Except it doesn't.

If it did, it would not be good at what it does.

AI doesn't source from garbage. It sources from anything you want it to, including the best works you can access.

They don't go off of college kids for AI art, for example. They source from the best of the best.
This post didn't exist to you until you read it. You willed it into existence in your psyche by choosing to observe it. Thats the power you have. Use it well.
Doe posted...
Uh, yes it does. GPT learns from mediocre writing, therefore its output is mediocre. In other words, the quality or accuracy of your output can only be as good as the data you're using.
Well, it's getting much better, and fast. You've likely used GPT 3.5 for the writing prompts. GPT 4 is an entirely different beast... it's good. It's actually really good. And it will only get better
in a constant state of confusion
Harpie posted...
Well, it's getting much better, and fast. You've likely used GPT 3.5 for the writing prompts. GPT 4 is an entirely different beast... it's good. It's actually really good. And it will only get better

Hopefully the universities implement AI that detects AI. Otherwise universities will become more pointless endeavors than they already tend to be.
This post didn't exist to you until you read it. You willed it into existence in your psyche by choosing to observe it. Thats the power you have. Use it well.
Toonstrack posted...
Hopefully the universities implement AI that detects AI. Otherwise universities will become more pointless endeavors than they already tend to be.
There are tools that sort of combat this problem, like this: https://gptzero.me/

The problem with AI detectors currently is that they're only useful for 100% plagiarized writing from GPT. Most students don't use it like that, though. Most students are grabbing snippets or editing their own essays with GPT.

Additionally, there's so much more you can cheat on than just writing. I'm sure there are plenty of other examples, but I'm in a lot of CS classes. There's currently no way to tell if someone is using AI to write their code. And I'm not sure there is a way
in a constant state of confusion
Toonstrack posted...
Hopefully the universities implement AI that detects AI. Otherwise universities will become more pointless endeavors than they already tend to be.

What will students do when they don't use AI but get accused of it anyway? Take it as a compliment?
Not to say that I'm in love with you
but who's to say that I'm not?
Uh, yes, GPT 4's training data contains good works, but it also contains a lot of bad ones. When they're pulling from such a wide pool, they're necessarily capturing a minority of really good things and a large majority of mediocre or bad things. That's just how human production is. And the primary way OpenAI has advanced GPT iterations has been by simply expanding the data set.

When it comes to inputting prompts that will spit out a school essay in particular, most of the data for GPT to reference will be meh. Because GPT doesn't think, it can't look at a good reference work and a bad reference work and tell them apart. It can only listen to the prompt input and create the output that most closely matches it, based on its training data.

I have ChatGPT 4 on my phone right now; I bought a month's subscription to try with my job and write some JavaScript, and I found that up to maybe 40% of its lines would contain errors on a first pass, as one example.

I actually think what text transformers are probably best at is unfortunately that AI girlfriend stuff lol. This is where GPT blows our human minds, because you can basically text it and it sounds like a person could be texting right back to you. By expanding context memory in new GPT iterations, it has gotten really good at remembering different traits, facts, and history, and thus continuing the appearance of a real conversation partner.

But for critical writing, GPT 4 still favors summary, and obviously it can't really make new observations about a work. I tried it in a class I took on '70s film to compare to my own work, and personally, if I had been a grading assistant, I would've given the outputs I saw a C+ or B- at best. Yes, they can be coherent, as if a person's writing, which is certainly an achievement on its own. But in my opinion it is not a beast. Granted, my standards aren't those of some student trying to scrape by a required class.
https://imgur.com/gallery/dXDmJHw
https://www.youtube.com/watch?v=75GL-BYZFfY
It's going to screw over books and newspapers too... humanity is rapidly going to be reduced to only a few sections, and it's going to suck for everyone but those at the top.
Necronmon posted...
It's going to screw over books and newspapers too... humanity is rapidly going to be reduced to only a few sections, and it's going to suck for everyone but those at the top.
This is true, but this was ALWAYS the direction capitalist society was heading.
Starfire: "They are too numerous to fight. What shall we do?"
Robin: "Fight anyway!" (pb)
Post #31 was unavailable or deleted.
Doe posted...
Uh, yes, GPT 4's training data contains good works, but it also contains a lot of bad ones. When they're pulling from such a wide pool, they're necessarily capturing a minority of really good things and a large majority of mediocre or bad things. That's just how human production is. And the primary way OpenAI has advanced GPT iterations has been by simply expanding the data set.

When it comes to inputting prompts that will spit out a school essay in particular, most of the data for GPT to reference will be meh. Because GPT doesn't think, it can't look at a good reference work and a bad reference work and tell them apart. It can only listen to the prompt input and create the output that most closely matches it, based on its training data.

I have ChatGPT 4 on my phone right now; I bought a month's subscription to try with my job and write some JavaScript, and I found that up to maybe 40% of its lines would contain errors on a first pass, as one example.

I actually think what text transformers are probably best at is unfortunately that AI girlfriend stuff lol. This is where GPT blows our human minds, because you can basically text it and it sounds like a person could be texting right back to you. By expanding context memory in new GPT iterations, it has gotten really good at remembering different traits, facts, and history, and thus continuing the appearance of a real conversation partner.

But for critical writing, GPT 4 still favors summary, and obviously it can't really make new observations about a work. I tried it in a class I took on '70s film to compare to my own work, and personally, if I had been a grading assistant, I would've given the outputs I saw a C+ or B- at best. Yes, they can be coherent, as if a person's writing, which is certainly an achievement on its own. But in my opinion it is not a beast. Granted, my standards aren't those of some student trying to scrape by a required class.
This makes a lot of sense. I'm currently in pretty entry-level CS classes, so that's probably why I haven't seen many issues with it. At least not enough to create the error rate you're experiencing with your level of programming.

So do you think that, so long as AI does not achieve AGI, it will be impossible for it to match high-quality professional writing?

AI waifus catered towards your exact desires and needs are gonna be huge lol
in a constant state of confusion
ImAMarvel posted...
Haven't tried using Copilot at all actually. It's similar I'm guessing?

And yeah for my coding assignments, I've generally done probably 90% of it on my own, then I'll end up getting stumped and will ask ChatGPT for help. Usually it ends up being that I was on the right track but I screwed up the execution somewhere along the way.

Yeah I never understood how some people would just use it to code their entire assignment and I'm like, dude. >_>
Copilot is GPT 4 integrated straight into your IDE. It will code alongside you, offer suggestions, and optimize whatever you're writing. I've avoided trying it out, though, since it doesn't encourage critical thinking and problem solving, which you kinda need to actually learn

Tbh the biggest help with AI is when it figures out my errors and why my code won't run / is dogshit. Never again will I spend way too goddamn long trying to fix a dumb formatting problem

LOL yeah, is the point of going to school to BS your way through with completely fake work? When you get to more advanced classes you'd be screwed. Especially on exams and coding interviews
in a constant state of confusion
Makes me feel secure knowing kids getting degrees won't know how to do anything, even write.
Trucking Legend Don Schneider!
[deleted]
Post #36 was unavailable or deleted.
Damn_Underscore posted...
What will students do when they don't use AI but get accused of it anyway? Take it as a compliment?

The AI, once it's good enough, won't do that.

There will be systems to record the device or the typing of the document too. All of that will become necessary
This post didn't exist to you until you read it. You willed it into existence in your psyche by choosing to observe it. Thats the power you have. Use it well.
Huh, interesting. All my friends just railed Adderall.
So long as you don't use it to write papers or anything, cool beans?

But most schools are using tools to check for AI usage above and beyond just running something through TurnItIn now.
Hmm...
BearlyWilling posted...
So long as you don't use it to write papers or anything, cool beans?

But most schools are using tools to check for AI usage above and beyond just running something through TurnItIn now.

The AI advancement is going to go all ways, including AI detection.

It'd be naive to think AI will be able to outsmart AI.
This post didn't exist to you until you read it. You willed it into existence in your psyche by choosing to observe it. Thats the power you have. Use it well.
chatgpt can't do puckdoku lol
leafs rule
Toonstrack posted...
The AI, once it's good enough, won't do that.

Then conceivably the AI will get good enough to be exactly like a real student, even individual students.

Someone could feed their essays into the AI and it will be able to write a new essay exactly like that person would.
Not to say that I'm in love with you
but who's to say that I'm not?
We now have access to Copilot at our company. However, most of the useful automation in my industry (structural engineering) is already here imo.
<insert sig here>
One thing that would be really useful would be feeding it company documents to create a sort of easy-to-access reference library. We sort of have that with internal search tools, but I wonder: "AI, what reports or internal research do we have relating to concrete properties and repair techniques after fire exposure? Who within the company is experienced in these types of projects?"
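
Something like this toy Python sketch is the shape of what I'm picturing, except with an actual model behind it. The report names and the dumb keyword scoring are completely made up for illustration; a real version would index the documents properly and let you ask in plain English.

# Toy sketch of searching our own reports with a question.
# File names and contents are invented; keyword overlap stands in for
# whatever the AI would actually do.
import re
from collections import Counter

reports = {
    "fire_damage_assessment_2021.txt": "Residual concrete strength and spalling repair after fire exposure ...",
    "bridge_inspection_guide.txt": "Routine inspection intervals and load rating for highway bridges ...",
    "post_tensioning_manual.txt": "Tendon layout and anchorage details for post-tensioned slabs ...",
}

def tokenize(text):
    # Lowercase word counts, nothing fancy.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def search(question, top_n=3):
    q = tokenize(question)
    scored = []
    for name, body in reports.items():
        d = tokenize(body)
        # Score each report by how many terms it shares with the question.
        score = sum(min(q[w], d[w]) for w in q)
        scored.append((score, name))
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_n] if score > 0]

print(search("concrete repair techniques after fire exposure"))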

Also, there are other applications of AI image detection that may be useful but are certainly not here yet. That would mostly be applicable to infrastructure maintenance imo.
<insert sig here>
I'm surprised so many people find ChatGPT useful. So-called "hallucinations" are rampant. Like, I see people saying they are having ChatGPT teach them new things. If you don't already know about the thing, you won't spot the errors and will assimilate false information. Summarizing long documents is also problematic for the same reason. If you haven't read the document, how many of the mistakes will you catch? Helping you rewrite a paragraph or two makes sense, or maybe help with form letters, but having it write papers for you is a terrible idea from a learning and personal growth standpoint.
Cuteness is justice! It's the law.
Jabodie posted...
One thing that would be really useful would be feeding it company documents to create a sort of easy-to-access reference library.
Now OpenAI has the data from your company documents.

Jabodie posted...
"AI, what reports or internal research do we have relating to concrete properties and repair techniques after fire exposure? Who within the company is experienced in these types of projects?"
Do you think that would be more useful than an internal company wiki or database?
Cuteness is justice! It's the law.
darkmaian23 posted...
I'm surprised so many people find ChatGPT useful. So-called "hallucinations" are rampant. Like, I see people saying they are having ChatGPT teach them new things. If you don't already know about the thing, you won't spot the errors and will assimilate false information. Summarizing long documents is also problematic for the same reason. If you haven't read the document, how many of the mistakes will you catch? Helping you rewrite a paragraph or two makes sense, or maybe help with form letters, but having it write papers for you is a terrible idea from a learning and personal growth standpoint.
This. As far as I can tell, the AI chatbots are the most efficient way known to provide high volumes of plausible bullshit.

It will be remarkable to see what fraction of the economy is driven by efficient production of high volumes of plausible bullshit.

But I can't see chatbots being useful for anything that requires understanding.
"The Party told you to reject the evidence of your eyes and ears. It was their final, most essential command." -- 1984
darkmaian23 posted...
Now OpenAI has the data from your company documents.

Do you think that would be more useful than an internal company wiki or database?
Probably not. The way we do it now is a combination of an internal search engine, technical forums, and full-time librarians who provide service to all branches (~30 nationwide). It works pretty well imo. All of these things are subdivided by technical subgroups, which also include architecture-related topics, so there are some ways to filter what you're looking for. There is also a repository of hundreds of technical webinars sorted by topic.

Funny story, an intern in one of our Texas offices drafted a report using ChatGPT. We are very unlikely to hire that intern lol.
<insert sig here>
darkmaian23 posted...
I'm surprised so many people find ChatGPT useful. So-called "hallucinations" are rampant. Like, I see people saying they are having ChatGPT teach them new things. If you don't already know about the thing, you won't spot the errors and will assimilate false information. Summarizing long documents is also problematic for the same reason. If you haven't read the document, how many of the mistakes will you catch? Helping you rewrite a paragraph or two makes sense, or maybe help with form letters, but having it write papers for you is a terrible idea from a learning and personal growth standpoint.

EPR-radar posted...
This. As far as I can tell, the AI chatbots are the most efficient way known to provide high volumes of plausible bullshit.

It will be remarkable to see what fraction of the economy is driven by efficient production of high volumes of plausible bullshit.

But I can't see chatbots being useful for anything that requires understanding.
Hallucinations do happen, but I wouldn't necessarily call them rampant. At least, not in the more modern GPTs. Also, understanding why hallucinations even happen can help you use AI in a way that minimizes them.

For me, hallucinations happen when I am more than ~5 or 6 questions deep on a specific subject. They also happen when I am not specific with my instructions.
You can minimize these problems by creating a detailed, organized, and very specific prompt. The more limitations you give it, the better. Also, making sure to repeat the original context with further questions helps keep it on track.
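
To give a rough idea of what I mean by a detailed, specific prompt, here's a little Python sketch. The layout and wording are just my own convention, nothing official, and the lecture notes are obviously made up.

# Build one structured prompt and reuse it for every follow-up question,
# so the context and the constraints get repeated each time.
CONTEXT = (
    "You are helping with an intro environmental science course. "
    "Only use the lecture notes pasted below. If the notes don't cover "
    "something, say that instead of guessing."
)

def build_prompt(notes, question):
    # Context + notes + question + output constraints, all in one block.
    return (
        CONTEXT + "\n\n"
        "Lecture notes:\n" + notes + "\n\n"
        "Question: " + question + "\n"
        "Answer in at most 5 bullet points and say which part of the notes "
        "each point came from."
    )

notes = "Week 3: the nitrogen cycle, fertilizer runoff, eutrophication ..."
print(build_prompt(notes, "Why does fertilizer runoff cause algal blooms?"))
print(build_prompt(notes, "How is that different from the phosphorus cycle?"))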

It's extremely helpful and accurate in my use cases
in a constant state of confusion
ImAMarvel posted...
I may or may not have used it for some of my CS classes. >_>

But didn't over rely on it by any means.

ChatGPT has sped up how quickly I can write scripts at work by a significant margin. As an example, no more racking my brain to throw together a stupidly long and complicated egrep; I just throw a bunch of greps into ChatGPT and ask it to make it one egrep, BAM, done.
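
Roughly the kind of thing I mean, sketched in Python instead of shell since the real patterns are work stuff. The patterns and log lines here are invented; the point is just that a pile of separate fixed-string greps can collapse into one alternation pattern, which is all something like egrep "ERROR|FATAL|panic" app.log is doing.

# Several separate greps folded into a single combined pattern.
import re

patterns = ["ERROR", "FATAL", "panic"]  # what the individual greps looked for
combined = re.compile("|".join(map(re.escape, patterns)))

lines = [
    "INFO service started",
    "ERROR disk quota exceeded on /var",
    "kernel panic - not syncing",
]

for line in lines:
    if combined.search(line):
        print(line)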
The commercial says that Church isn't for perfect people, I guess that's why I'm an atheist.