Great argument.
Anyone who’s ever tried using LLMs for in-depth research, be it legal, academic bibliographies, or even random stuff like podcast or TV recs, will confirm that they hallucinate quite often.
So you need humans to make sure they aren’t churning out bullshit.