GPT-4 shouldn’t worry anyone – we just need to get better

Text generators have been around for years, but they finally got good. Where previous iterations required extensive editing, OpenAI’s latest version of ChatGPT writes with better grammar than I do – sorry, me. But both chatbots and generative artificial intelligence (AI) tools like the newly launched GPT-4 lack that creative spark.

This is mainly due to how they work. ChatGPT’s machine learning system, for example, was trained on 570 GB of text from Wikipedia, the open web and books. By churning through those 300 billion words, the system has learned how sentences are structured, but it still doesn’t understand what it’s saying – which means the paragraphs it outputs read naturally but may not be accurate.
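The idea of fluent-but-clueless generation can be seen in miniature. The sketch below is a toy bigram model – vastly simpler than ChatGPT’s transformer, and purely illustrative – that learns only which word tends to follow which in a tiny made-up corpus, then emits plausible-sounding strings with no grasp of meaning or truth:

```python
import random
from collections import defaultdict

# Toy training corpus (hypothetical): the model will only ever
# recombine patterns it has seen here.
corpus = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog on the mat"
).split()

# Count which word follows which in the training text.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a seen next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        candidates = transitions.get(words[-1])
        if not candidates:  # dead end: no observed continuation
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the", 8))
```

Every output is locally grammatical because each word pair was observed in training, yet the model has no idea whether the sentence it produces is sensible, let alone true – the same gap, scaled down, that the column describes.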

While AI models can be trained on vast amounts of data, recognize patterns and generate coherent text, they may not capture the deeper meaning or implications of the words they use. Nor is AI as creative as humans: it can generate text based on patterns it sees in data, but it can’t come up with new ideas or perspectives the way a human writer can. That can leave its writing lacking in originality and depth.

No wonder teachers are concerned: the New York City Department of Education has banned access to ChatGPT over its network, and an Australian school is returning to in-person paper-and-pencil exams. School is the only place where people are judged on their writing about well-worn subjects. The answers to questions about themes in 19th-century novels and symbolism in Shakespeare are all already online, so ChatGPT can easily reproduce them. Of course, students can too.

It’s easy to buy ready-made essays, find sample texts to steal bits from, or otherwise get answers on the internet – type “Pride & Prejudice topics” into Google and you’re done. Reading SparkNotes, Wikipedia or other secondary sources isn’t cheating; as every student should know, copy-and-paste is plagiarism, but reading and thinking about the information before paraphrasing it is the real work.

For some students, writing in their own words is the hardest part, but there’s plenty of algorithmic support beyond Google Search to help. Spell checkers are a lot smarter than they used to be, while tools like Grammarly catch not just mistakes but awkward phrasing, helping to improve your writing while you’re pounding away at your keyboard.

Teachers, too, have tools at hand to thwart AI-assisted cheating, beyond blocking websites on campus networks or forcing students back into pre-digital classrooms. Plagiarism checkers can spot some AI-generated text because it’s largely pulled from around the web and contains noticeable patterns and repetition – and that’s according to ChatGPT itself, when I asked it for advice on detecting text it had written.

I also asked ChatGPT to write my column for me. Unfortunately, my editor wasn’t very keen on paying me to copy and paste several hundred words, but ChatGPT’s argument for why AI isn’t a good way to write a column was actually solid: the system lacks nuance, it can’t understand context or generate new ideas, and – it added – using it would be unethical because it would put me out of a job.

After all, this AI doesn’t think, it vomits. That we ask students to do the same thing so often that an AI can replicate it is a fair criticism of our curriculum.

By introducing more randomness, AI can be used to bang ideas together to create something new. That’s what Janelle Shane does with AI Weirdness, a fabulous blog and newsletter that reveals a lot about the inner workings of AI systems while also developing paint colors like “Turdly” and recipes for a chocolate brownie with a cup of horseradish.
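That “more randomness” knob is, in standard text-generation terms, the sampling temperature. The sketch below is a generic illustration of the technique, using hypothetical scores for candidate paint-colour names (not Shane’s actual code or data): low temperature makes the model nearly always pick the safest option, while high temperature flattens the distribution so oddball choices like “Turdly” get a real chance.

```python
import math
import random

def sample_with_temperature(scores, temperature, seed=None):
    """Convert raw scores to probabilities (softmax) and sample one option.

    Low temperature sharpens the distribution toward the top score;
    high temperature flattens it, producing more surprising picks.
    """
    rng = random.Random(seed)
    scaled = [s / temperature for s in scores.values()]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(list(scores.keys()), weights=probs, k=1)[0]

# Hypothetical scores a model might assign to candidate names.
scores = {"Beige": 3.0, "Sky Blue": 2.0, "Turdly": 0.5}

print(sample_with_temperature(scores, temperature=0.2))  # almost always "Beige"
print(sample_with_temperature(scores, temperature=5.0))  # the weird options become live
```

Cranking the temperature up is one simple way to “bang ideas together”: the model stops playing it safe and starts surfacing the improbable combinations that make Shane’s results so strange.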

While often hilarious, randomly stringing letters together to approximate words isn’t really creativity on the AI’s part. Humans have a capacity for originality that machine learning lacks; AI can only recombine what we have already done.

My favorite example comes from Mark O’Connell’s annoyingly brilliant book To Be a Machine, in which an unnamed futurist suggests to the author that much journalism could easily be replaced by AI. In retaliation, O’Connell includes in his book a description of the futurist “retrieving a fallen pistachio from his expensive shirt – an act of petty and futile revenge, and of the kind of absurd irrelevance that would surely be beneath the dignity and professional discipline of an automated writing AI.”

Of course, ChatGPT could include this example if trained on O’Connell’s book, although it can’t offer up similar absurdities of its own. But, just like the AI, I borrow from other people’s texts to build my arguments, so maybe writers like O’Connell are irreplaceable but I’m not. That’s a tough conclusion to reach about my life’s work, so prove me wrong: find the paragraph in this column written by ChatGPT. There’s no prize, but if you’re right, it might help quell that sinking feeling that I should retrain as a yoga teacher or park ranger or some other job less easily replaced by AI.
