I just had to email my 16-year-old's teacher to explain that he did not use AI for an assignment. (I watched him complete it!)
I also included multiple references for why no one should be using AI detectors in education.
TurnItIn is making $$$$ by lying to schools.
@lowd next human-AI interaction assignment prompt: "It's 2035, and your teenage kid arrives home on your hoverboard to tell you that they have been accused of having AI complete their homework for them:
1) what was their homework?
2) what was the accusation?
3) what is their defense?"
@lowd My immediate question is: what school thinks a 16-year-old's assignment is so crucial that they need to run proprietary/commercial "AI detection" against it? What's he going to get out of it, a slightly better grade to show on his report card? Better crack down on this truly reprehensible illicit activity without hesitation! Further, why is this teacher trying to be a volunteer cop? Super lame all around.
@lowd omg, I just looked up "TurnItIn". What kind of EULA encumbers the service that this teacher is submitting your child's original work to? How is that even ethically acceptable? If my kid's work was being posted up to some US-based "authentication" service I'd be downright livid. Did they even ask for your consent to share your child's work with 3rd parties?
@lowd if they’re unhappy about how the child has produced the homework, the answer is to not give them homework and instead teach them properly in school.
Homework is illegal in France, on the basis that it is the parent who determines the quality of the homework, not the child. The French believe that everybody should receive the same education, regardless of who their parents are or what their parents do.
@lowd I am so glad I graduated back in 2018 - before this was a concern. Even then the plagiarism detection could be overzealous, but now?
Teachers have been lazy for a long, long time, and the admins helped. Simply questioning the suspect student about their assignment, to make sure they know what's in it, is all that needs to be done.
@lowd I'm literally sitting in a "Transitioning out of the military" class right this second, and one of the Big Nasties that I learned today is that employers are now using "AI" to conduct initial intake interviews for job applicants. It's designed to analyze you in a video and derive information from it.
Given the ineptitude we've seen with AI in many industries, and the fact that whoever built this probably only made it for neurotypicals, I am fucking horrified.
@lowd Good to know, as we were just told by our professor that our Uni is using the service.
@lowd It's entirely possible to write software that accurately detects 100% of AI-written text... as long as you don't look too closely at the false positive rates...
@lowd (hint: also 100%)
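In case the joke needs spelling out, here is that entire "detector" as a toy sketch (a trivial classifier, obviously not anything any vendor actually ships):

```python
# A deliberately trivial "AI detector": it flags everything as AI-written.
def detect_ai(text: str) -> bool:
    return True  # catches 100% of AI text... and 100% of human text

ai_texts = ["Certainly! Here is an essay about photosynthesis..."]
human_texts = ["i wrote this myself at 2am, sorry for the typos"]

detection_rate = sum(detect_ai(t) for t in ai_texts) / len(ai_texts)
false_positive_rate = sum(detect_ai(t) for t in human_texts) / len(human_texts)
print(detection_rate)       # 1.0 -- "accurately detects 100% of AI text"
print(false_positive_rate)  # 1.0 -- hint: also 100%
```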
AI should be limited to medical applications only.
That's the only benefit of AI that I can see.
It's also a huge energy waster.
@lowd I hated Turnitin. Teachers really lean on it way too much. There have been SO many errors and overstated percentages that I would think they'd get rid of it or replace it. It's not good.
I argued the case that my institution shouldn't enable AI detection in Turnitin because of how many false positives it reports. Fortunately management agreed.
The basic tool is good for flagging matches for a person to double check - but the AI add-on is not worth using. As long as there's a high chance of false reports it should never be used to accuse a student of cheating.
@lowd I still remember when I went to uni (not too long ago, but before COVID) and how Turnitin would say my last name and page number were plagiarised. I can't remember what source it supposedly matched, but such bullshit.
@lowd I had a prof in library school who insisted on using turnitin. This in a school that was all about fair use, plagiarism etc. I told said prof to pay me for my copyright before I’d assign it over to turnitin in perpetuity. Worked like a charm!
Now I tell my own students to plagiarize all they want, they just end up cheating themselves. Plus anyone in the fields they’re going into can sniff out fakes from a mile away. Appealing to their better nature seems to work.
@lowd I'm finding that use of plagiarism detection tools is increasingly required by exam boards and awarding bodies. I am trying to write policy that maximises human interpretation of the results of such tools, for the reasons you outline.
@lowd Now that he knows that this stupid threat is an issue in his school, he should record himself while working on future assignments.
Well, at least until somebody comes up with the business model of telling teachers that these recordings are forged by AI.
@lowd My question is: so? So what if someone cheats on a written exam? Not only with AI, but in general. Can you do what you are supposed to do? Good, you pass. Did you know that doctors – yes, doctors – look up your symptoms while you talk to them? Are they cheating? How? Unless you are training your students for a job on a desert island, I guess. Or if you are testing quick thinking or reasoning or whatever, then talk to them.
@alsorew
In general, cheating in foundational courses makes one ill-prepared for the more advanced courses.
A physician isn't looking up what "anterior" means, they're checking diagnostic criteria. A software engineer isn't looking up what short-circuit evaluation is, they're checking API documentation. And so forth.
@lowd I knew my kids turned in their essays to TurnItIn but I didn’t realize this is why. Yeah, it’s a problem
@MattFerrel I believe it was originally for plagiarism checking, but now they’ve added AI checking as well. It’s still up to teachers to make a judgment call about how to handle the results. But the evidence they give to teachers (a few numbers and some highlighted sentences) isn’t sufficient.
@lowd @MattFerrel Yes, and it is dangerous if used indiscriminately. False positives in the hands of dumb people have already done real harm.
@MattFerrel @martinvermeer Yes. I really like “Weapons of Math Destruction” by Cathy O’Neil, which does a great job of describing this class of harm and examples of it.
Not a huge harm for us today, but the system is bad and it doesn’t have to be.
@lowd one layer deeper: what are the structural lessons for the student here?
@ryancoordinator Unfortunately, the immediate lessons are:
1. My teacher doesn’t trust me
2. I live in a world where things like this happen
3. I should make my writing look weirder and less polished so I don’t get accused again.
This mirrors problems many students have faced for years - being accused of cheating because they were stereotyped as not being smart enough to do good work. We are very fortunate/privileged and this whole incident should not have a big impact on my kid.
@ryancoordinator I’m very troubled that this is happening, because it will hurt other students much worse.
AI detection is a perfect example of how NOT to use AI — secret, unaccountable, trusted by authority, and offering little recourse.
I believe AI can be good, but it needs to be embedded in an appropriate system/context to mitigate harms.
And, ...
4. My teacher is a fraud.
5. My teacher cannot be trusted as a source of truth or reliable information.
6. My teacher fundamentally does not know how to do their job. And their superiors are not dealing with this problem.
...
@JeffGrigg @lowd
And maybe
7. The authorities I'm subjected to are incompetent clowns, and everyone is okay with that
8. Being right is less important than special pleading
9. Abrogation of responsibility has no consequences because what really matters is structural power
@lowd @ryancoordinator Lesson 3 unfortunately has real-world value now. When I find stuff in search results with the kind of "polished" look matching the writing structures taught in schools, I assume it's content-farm drivel (LLM or not) and immediately click back. Obvious ESL or ND writing is, for now, the most efficient indicator of authenticity I've found.
@lowd Can you share some of these references?
@docfleetwood Sure!
1. OpenAI claims AI detection doesn’t work:
https://help.openai.com/en/articles/8313351-how-can-educators-respond-to-students-presenting-ai-generated-content-as-their-own
2. Ars Technica on risks of AI detection:
https://arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai/
3. AI detection biased against non-native English speakers:
https://arxiv.org/pdf/2304.02819.pdf
4. AI detectors can’t reliably detect AI-generated text:
https://arxiv.org/pdf/2303.11156.pdf
5. TurnItIn reports limitations of their own tools:
https://www.turnitin.com/blog/ai-writing-detection-update-from-turnitins-chief-product-officer
6. Washington Post article on false positives:
https://www.washingtonpost.com/technology/2023/08/14/prove-false-positive-ai-detection-turnitin-gptzero/
@lowd @docfleetwood Another for the pile: Vanderbilt University's well-thought-out (and documented) rationale for disabling TurnItIn specifically:
https://www.vanderbilt.edu/brightspace/2023/08/16/guidance-on-ai-detection-and-why-were-disabling-turnitins-ai-detector/
@lowd @docfleetwood Yeah, and another point for that teacher: no matter how much evidence and testimony there is (like what you've provided), a business will still happily take your school district's money to "perform detection of AI writing" or whatever.
@lowd @docfleetwood But aren't there AI products for plagiarism detection that actually work? Including for plagiarised text that has been through an AI spinner?
@drgroftehauge @lowd @docfleetwood No, because there is no viable definition of "works". By nature these tools can only give a probability, and their formulas for computing this probability are i) all different and ii) biased one way or another.
Add to this the fact that the human who reads this probability and gets to decide whether to accept or reject the homework will use an arbitrary threshold.
There is *no* way this can "work".
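To illustrate the threshold point with a toy example (random numbers standing in for detector scores; no real detector is modeled here), the same scores yield wildly different accusation rates depending on a cutoff somebody picked arbitrarily:

```python
import random

# Fake "AI probability" scores for 30 essays, all genuinely written by
# students. Random numbers are a stand-in; no real detector is modeled.
random.seed(0)
scores = [random.random() for _ in range(30)]

# How many students get accused is purely an artifact of the threshold.
for threshold in (0.5, 0.7, 0.9):
    flagged = sum(s >= threshold for s in scores)
    print(f"threshold {threshold}: {flagged}/30 innocent essays flagged")
```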
@aaribaud @lowd @docfleetwood Huh? You look at the student text, you look at the text it was plagiarised from, done. If there are many instances of plagiarism then you are on to something.
My concern here is that someone who doesn't know what academic misconduct is will say that anything that involved AI at any point is bogus. Because we can't say if something was regurgitated by an LLM.
@drgroftehauge @lowd @docfleetwood
Several points here, the main ones being:
1) AI detection tools purport to detect AI use, which is not the same as plagiarism.
2) Even when flagging AI use in some text that incidentally is plagiarized, AI tools will not provide references to the plagiarized sources. That makes the proposed comparison hard to achieve.
@drgroftehauge @lowd @docfleetwood Besides these points: yes, we can't reliably tell whether an LLM was used, and that is why AI detection tools are bad for assessing a student's work.
@drgroftehauge @lowd @docfleetwood Afterthought: I don't see how LLM tools specifically would work for plagiarism detection, let alone proof. They're meant either to compute probabilities about their input or, for generative tools, to complete their input in a probabilistically (but hardly semantically) sound way. For that, they construct a model of their training corpus in such a way that tracing back from their output to specific sources is extremely hard, if possible at all.
@drgroftehauge @aaribaud @lowd @docfleetwood That sounds more like text matching. Text matching is not the same as plagiarism.
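To make that distinction concrete, here is a toy version of text matching (naive word n-gram overlap; Turnitin's real matching is proprietary and far more elaborate). It surfaces shared passages for a human to inspect; it says nothing about AI authorship, and shared text alone isn't proof of plagiarism either:

```python
# Naive text matching: find the word n-grams two texts have in common.
def shared_ngrams(a: str, b: str, n: int = 5) -> set:
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(a) & ngrams(b)

student = "the industrial revolution transformed the social fabric of europe forever"
source = "historians agree the industrial revolution transformed the social fabric of europe"
for match in shared_ngrams(student, source):
    print(" ".join(match))  # each shared 5-word passage, for a human to review
```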
@drgroftehauge @lowd @docfleetwood it's inherently difficult to detect if something was written by AI because those AIs are trained to produce human-like text.
Good luck finding anything that can detect it reliably.
@lowd The grifts go even deeper. I was talking to a colleague recently whose college-age son (and all his friends) had been paying some service $30/month to re-write their assignments so that they supposedly wouldn't get caught by "AI detectors"
@ricci Actually, come to think of it, AI vs AI might be a fun game to be played by computers.