I wrote a post in January about the bias in ChatGPT regarding global warming. I concluded it was, much like Wikipedia, occasionally useful for small things but totally unreliable on matters of politically correct orthodoxy. I've since experimented again to test my simple theory of why journalism has failed us so.
Your exercise also proves that RELEVANT KNOWLEDGE pays. Chat finally gave a decent answer when you gave it just the right counterexample, which you knew in advance to be relevant. If you didn't already know this, you could have asked and prodded forever without disturbing its confident idiocy.
Modern journalists have no life experience, so they have no knowledge and no basis for questioning the standard answer. They haven't worked in science or farming or a bureaucracy or anywhere except the closed circle of journalists.