ChatGPT’s days as a party game are over

Alongside the race to develop artificial intelligence (AI), there is a race to sell good, expensive advice to white-collar workers and laypeople alike. Consultants, pundits, and others are positioning themselves, and there is cream enough for everyone to skim.

And anyone who pulled an “I asked ChatGPT if it knew who I was” prank at a summer party must accept that the time for “stupid language models” as a party trick is over. We need proper conversations about how AI can and should be used.

And ChatGPT is not a “he”. Children’s jokes may live forever, but the chatbot jokes must die.

The reasons for this somewhat drastic demand are not that the jokes are terribly annoying, but a bit weightier:

  • The language model isn’t stupid, fun, or cute. It is not a person but a probability calculator.
  • The diversionary maneuver can limit real knowledge sharing and much-needed discussion.

Technology has already presented us with many value-based and ethical dilemmas, and the sooner we take a position on them, the better off we will be. It is not the technology itself that determines how well we handle an AI-powered everyday life. When “everything” becomes possible, the choice between right and wrong remains, in both the short and the long term.

Just as ethics are unevenly distributed among people, different professional groups have different ethical frameworks. Doctors take the Hippocratic Oath, an ethical code with roots in antiquity. Health care personnel can lose their license for serious violations. Lawyers and police are subject to supervision. Journalists and editors follow Vær Varsom-plakaten, the Code of Ethics of the Norwegian Press, which is enforced by the self-regulatory body Pressens Faglige Utvalg (the Norwegian Press Complaints Commission).

Professional ethics comes into play when big choices about technology have to be made, and when development moves at such a pace that even technologists are asking for a pause. Laws on artificial intelligence are coming, including the EU’s regulation of artificial intelligence, but at the margins of the law, morals are required: rules of the road for companies and industries, and a mountain code for everyone.

The work of moral interpretation should happen in forums small and large, and those of us who have done it should consider sharing our preliminary and incomplete conclusions. Precisely because this, too, is a race.

Here are DN’s guidelines for AI, and our interpretation of technology-neutral journalism ethics. We choose not to publish AI-generated images that readers might mistake for real photography, nor unedited text written by generative AI. We limit the use of AI-generated illustrations and do not share sensitive material. In return, AI tools can help with proofreading, translation, transcription, title suggestions, summaries, idea development, metadata creation, and much more.

Of course, we must be transparent and upfront about our assessments and our use of AI. Information and guidelines are updated as technology evolves.

We cannot predict the future and we have no final answers, but we must experiment, evaluate, and learn, with press ethics as our backstop.

Codes of ethics are not something handed out the moment a profession or industry is founded. Where they are lacking, discussions about ethics should be put at the top of the agenda.

Norway’s current uncrowned AI queen, Inga Strømke, called last year for a binding professional-ethical responsibility for developers. Since then, the need has only accelerated. Developers sit in the eye of the hurricane; they understand the technology best and can see what is at stake long before ordinary people can weigh in.

Useful tools are often equally useful to those who do not share their makers’ good intentions. Nir Eyal, who developed infinite scrolling, did not intend for the world to become addicted to screens and lose the ability to focus. That is precisely why the editor of the developer trade publication Kode24 was annoyed when his readers “didn’t understand the problem” with Nettavisen publishing realistic AI-generated images.

As mentioned, DN interprets journalism ethics differently. The public must be able to trust that journalism does not blur the distinction between photojournalism and AI-generated illustration. The oncoming flood of synthetic text, images, and video will be confusing enough without Nettavisen’s “help”.

In Machines That Think, Inga Strømke addresses the media industry. If we do our job, the AI summer could become a new spring for journalism: fact-checking for the public, and transparency about how we intend to cover, use, and not use AI.

No one thinks better alone. So the next time you hear “I asked ChatGPT if it knew who I was, and it said I had written a number of books”, challenge the joker to a real discussion.

Ask for guidelines at your company, school, political party, or agency. Ask about ground rules for your own use, and your children’s use, of generative AI.

It is urgent that we think wisely together.

Copyright Dagens Næringsliv AS and/or our suppliers.
