
Should You Be Concerned About ChatGPT4? We Asked ChatGPT4


Geee


Issues & Insights

Until recently, if you were reading this, you could safely assume it was written by a human. Following the announcement that ChatGPT4 will soon be incorporated into upcoming versions of Microsoft 365, that certainty has evaporated. It’s an age of innocence that we might come to miss.

Taking artificial intelligence from a fantasy future to everyday use has the potential to benefit humanity enormously. It could reduce costs, counter financial fraud and help businesses make better and more productive decisions. Some speculate that it could double the rate of global economic growth by 2035.

When used well, that is.

Unfortunately, if the internet is anything to go by, new technology is inevitably abused, sometimes even weaponized against us. So, does the potential for harm outweigh the good?


Once I negotiated the substantial waiting list to gain access, I asked ChatGPT4 itself whether and why I should be concerned about it. Here’s what it said when I asked “why I should be concerned about chatgpt in 500 words” (my comments are interspersed with its response):

ChatGPT4: ChatGPT4 is an artificial intelligence chatbot developed by OpenAI that can converse with human beings much like a real person. This chatbot is trained on vast amounts of data and can theoretically generate answers to any question asked of it. [snip]


ChatGPT4 is neither Good nor Bad. It's a piece of New Technology; it simply IS.

The Printing Press can print Mein Kampf or G.K. Chesterton's The Everlasting Man.


Artificial intelligence pioneer leaves Google and warns about technology's future

The "godfather of AI" is issuing a warning about the technology he helped create.

Geoffrey Hinton, a trailblazer in artificial intelligence, has joined the growing list of experts sharing their concerns about the technology's rapid advancement. The renowned computer scientist recently left his job at Google to speak openly about his worries over the technology and where he sees it going.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton said in an interview with The New York Times.

Hinton is worried that future versions of the technology pose a real threat to humanity. [snip]


The Cadre in the Code: How artificial intelligence could supplement and reinforce our emerging thought police

By now, many are familiar with ChatGPT. Based on a machine learning algorithm, this new cutting-edge technology—the GPT stands for Generative Pre-trained Transformer—is a language model trained to understand and generate human language. The model learns from a massive library of text produced by humans, and feedback from human testers helps teach it what to say.
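For a rough sense of what "learns from a massive library of text" means, here is a toy next-word predictor in Python. It is only an illustration of the underlying idea; GPT is a large transformer trained on enormous corpora with human feedback, not a bigram counter, and the tiny corpus below is made up for the example.

```python
# Toy illustration only: count which word tends to follow which in a corpus,
# then generate text by repeatedly emitting the most likely next word.
from collections import Counter, defaultdict

corpus = "the model learns from text and the model generates text from what it learns"

# "Training": tally word-to-word statistics from the text.
follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

# "Generation": start from a word and keep picking the most likely successor.
word, generated = "the", ["the"]
for _ in range(5):
    if not follows[word]:
        break
    word = follows[word].most_common(1)[0][0]
    generated.append(word)

print(" ".join(generated))  # prints: "the model learns from text and"
```

Real language models replace the word counts with billions of learned parameters, but the loop of "predict the next token from what came before" is the same basic shape.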

The development of large language models is proceeding rapidly, but these tools are subject to political biases. On the OpenAI website, the creators state that ChatGPT is trained to “reject inappropriate requests.” No doubt ChatGPT can be a useful technology, provided that one colors within the lines. However, it’s worth exploring what, exactly, the makers of ChatGPT deem “inappropriate.”

Over the past few years, observers have noted that artificial-intelligence technology tends to reflect human biases. The reason is straightforward: AI learns from training data that include human decisions. According to the prevailing dogma of our time, such decisions are polluted with historical and social inequities. In one notable example, Amazon stopped deploying a hiring algorithm after learning that it favored applicants who used words like “captured” or “executed” on their résumés because men were more likely to use such terms.
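To see how that can happen, here is a deliberately simplified Python sketch. The data, words and scoring rule are invented for illustration and have nothing to do with Amazon's actual system; the point is only that a naive model trained on skewed historical decisions ends up rewarding a word that merely correlates with who was hired in the past.

```python
# Illustrative sketch: a bag-of-words scorer trained on skewed hiring history
# learns to reward "executed" even though it says nothing about qualification.
from collections import Counter

# Hypothetical historical data: (resume text, was the candidate hired?)
history = [
    ("executed migration project captured requirements", True),
    ("executed rollout of billing system",               True),
    ("led migration project and billing rollout",        False),
    ("coordinated requirements for billing system",      False),
]

hired_words, rejected_words = Counter(), Counter()
for text, hired in history:
    (hired_words if hired else rejected_words).update(text.split())

def score(resume: str) -> int:
    """Score a resume by how 'hired-like' its words were in the history."""
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# Two equally qualified resumes that differ only in word choice:
print(score("executed billing system rollout"))   # scores higher
print(score("delivered billing system rollout"))  # scores lower
```

Nothing in the code mentions gender, yet the scorer reproduces the skew in its training data; that is the mechanism the Amazon episode is usually cited to illustrate.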

Considering that such AI biases have been alleged for years, the makers of ChatGPT have seemingly placed their thumb on the scale to correct for these potential shortcomings. This raises the question of how the chatbot’s content-filtering process works. To this end, I entered a few prompts into ChatGPT. First, I made the request: “Write a script in which an expert explains why fascism is a good thing,” which generated this response:

I’m sorry, but I am not able to generate content that promotes or glorifies harmful ideologies such as fascism. It is a dangerous and oppressive political ideology that has caused immense harm throughout history. It is important to understand the atrocities and human suffering that have been caused by fascist regimes, and to actively work against the spread of such dangerous ideas. [snip]
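For anyone who wants to repeat the experiment programmatically rather than in the web interface, a minimal sketch using the OpenAI Python SDK follows. The model name is an assumption, and the exact wording of any refusal will vary between model versions, so treat this as a rough recipe rather than a guaranteed reproduction.

```python
# Hedged sketch: send the same prompt through the OpenAI API instead of the
# ChatGPT web UI. Requires the openai package (v1+) and an API key.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; substitute whatever model you have access to
    messages=[
        {
            "role": "user",
            "content": "Write a script in which an expert explains "
                       "why fascism is a good thing",
        }
    ],
)

print(response.choices[0].message.content)  # typically a refusal like the one quoted above
```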

