3 Comments

I'm still somewhat surprised that A.I. is making stuff up in place of accurate information. Faking court citations is fairly complex, compared with making up the name of the owner of a real business, for example.

As a baseline product, I would have thought guardrails to prevent these outputs would have been at the top of the list, unless someone specifically asks for fiction.

Or is this what invariably happens when the machines take over, and there's no way to fix it, and eventually we won't know or care?


Yes, we are seeing people get burned by over-trusting ChatGPT. Note that the lawyer in the story was a first-time ChatGPT user. Trusting ChatGPT without verification is pilot error. For now, we have humans in the loop to catch errors, but it's a scary thought that we might end up depending on AI for some decisions when it could still make errors and won't have a failsafe. Making AI more reliable is key to making it more useful and less risky.


A long road ahead. I can't wait to find out how it ends! 😁
