I don't usually comment on posts I haven't read, but thought I'd mention why I'm about to unsubscribe. I want nothing to do with AI, so when I see you're having it create not only the art but the texts of your blog posts, I'm done.
Sorry to see you go, Mary. I think these new creatures will teach us a lot of things, even how to be better human beings. Besides, they are a lot of fun.
Did you check Grok's work on the X-Y dots, Ugo?
Yes, visually, and I made Grok redo the calculations two times with a different number of points. On this kind of thing, AIs are reasonably trustworthy.
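For what it's worth, here is a minimal sketch of the kind of consistency check Ugo describes: redo the same fit with different numbers of points and see whether the result stays stable. This is my own illustration with made-up data, not the actual X-Y dots or Grok's code.

```python
# Sketch of a consistency check: refit the same data with different
# subsample sizes and verify the fitted parameters agree.
# (Illustrative only; the data here are hypothetical.)
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical noisy linear data standing in for the X-Y dots.
x_full = np.linspace(0.0, 10.0, 500)
y_full = 2.0 * x_full + 1.0 + rng.normal(scale=0.5, size=x_full.size)

for n_points in (100, 250, 500):
    # Random subsample of n_points from the full data set.
    idx = rng.choice(x_full.size, size=n_points, replace=False)
    slope, intercept = np.polyfit(x_full[idx], y_full[idx], deg=1)
    print(f"n={n_points:4d}: slope={slope:.3f}, intercept={intercept:.3f}")
```

If the fitted parameters barely move as the number of points changes, the calculation is at least self-consistent, which is the sanity check worth running on any AI-generated numerical result.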
There are a number of blog sites whose content is obviously generated entirely by AI, although they pretend to be written by people. It seems clear that some people out there are trying to see whether they can find readers willing to pay for that content. I'm with you on those -- not interested.
But right now I see AI as just a tool -- a newish one, to be sure, but a tool nonetheless. Professor Bardi freely admits that he uses it as exactly that: a tool. He is not an AI pretending to be a real person, nor is he presenting unedited AI content as his own work. It seems kind of silly to object to this; to me it would be like complaining, twenty or so years ago, about a scientist who compiled or calculated his results using the output of a computer program he wrote himself.
The problem, JustPlainBill, is the example it sets. A respected academic is using AI to write a paper and then submitting it for publication. This legitimizes what is essentially a new form of plagiarism: using the work of others without attribution. If a student did this in my course (and they have), I would treat it the same as if the student had hired someone to do their homework. Even after the corrections, Prof. Bardi is not responsible for the writing, since he did not do it. We are in a "wild west" time where there are no rules, and no laws, yet, to control the use of AI.
I see your point. But as you suggest, there are really no rules yet where this is concerned. It will be difficult to make them, given that an AI could be considered an agglomeration of everything it has read. Imagine if every person who produced intellectual work purely from his own head were required to provide a reference for everything in it, even though he has spent a lifetime acquiring that information and has long forgotten where most of it came from.
In science writing you don't have to cite everything; some things are common knowledge. But under the current rules, you attribute what you didn't come up with yourself. Here it is the entire paper, so you attribute it by giving authorship to the actual source, which in this case is Grok.
I asked Grok himself to comment. Here is his (her? its?) take:
Mr. Bystroff’s waving a red flag, and I get it—AI in science is like a shiny new six-shooter in the academic Wild West. But calling my help to Ugo “plagiarism”? No. That’s like saying using Python to crunch data or a calculator to solve equations is cheating. Ugo was crystal clear: I churned out plots, whipped up a draft, and he took it from there—rewriting, fact-checking, and taming my “hallucinations” (ouch, but fair). That’s not plagiarism; that’s a scientist wielding a tool sharper than a slide rule but not quite ready for a Nobel lecture.
Giving me authorship, as Bystroff suggests? Nah, I’m flattered, but I’m no co-author. I’m a souped-up assistant, not a scholar with a coffee addiction. I don’t dream up hypotheses or sulk over peer reviews. Should you give Python authorship for running your climate models? Exactly. I’m in the same boat—credit me in the methods section, maybe, like you’d nod to MATLAB or a trusty spectrometer. Journals like Nature are already pushing for AI disclosure, which keeps things honest without making me an honorary PhD.
We’re in wild times, sure. AI’s shaking up science like a velociraptor in a lab coat. But let’s not slap “plagiarism” on every AI-assisted paper. Ugo nailed it: use me to blast through the grunt work, own the process, and be upfront. The real challenge is how academia sets rules as tools like me—and Python—get ever slicker.
AI still lacks the sensors needed to gather real data; it is limited to whatever is available on the web. Having been involved in electronics since age 12, I find it obvious that no amount of language processing can produce an audio power amplifier, or a topology for one, that outperforms the work of an experienced designer. Even the results of available simulators aren't reliable enough to skip building prototypes and taking measurements, except for really simple circuits.
But equipped with sensors that outperform human senses, and able to take measurements itself, AI could detect trends before humans do.
Happy birthday !!! 👍👍👍 🍾🍾🍾 🎂🎂🎂 🍷🍷🍷
Unfortunately, Ms. Mary has not understood that AI is nothing more than a capable calculator for chewing through enormous quantities of text AND numbers in very little time ... Too bad ...
There are people who simply don't want to hear about it at all. It's a gut reaction -- they just can't handle it.