I finished reading Danny Gregory's blog entry on Zork and was inspired to drop him a quick "dude, well written!" type comment when I ran into a fairly common snag: whatever I wrote looked like it came from an automated bot engaging in comment spam. Anything I could think to write ("Nice," "Well said!," "Keep the great posts coming!") seemed so obviously canned that surely a real human wouldn't write such a thing.
This isn't a new problem. Back in the day, I used to answer live chat requests on my company website. Those chat sessions always made for interesting discussions (note to self: find and install a new live chat platform). One challenge I'd run into is that some folks wanted me to prove I wasn't a bot. No matter what information I offered, it only seemed to confirm their hunch that they were talking to a machine. I don't have any of those transcripts handy, but here's how I recall them going:
Me: Howdy! How can I help you?
Visitor: are you a real person?
Me: Sure.
Visitor: prove it.
Me: Let's see. My name is Ben. It's currently cold outside. I'm sitting here in sweatpants.
Visitor: are you really a person?
Me: Yes. Really. I promise.
...
Just reading the above transcript, I barely believe I'm human.
I suppose this all falls into the category of First World Internet Problems. But I find it fascinating that in our race to make computers appear more like humans, we've actually made humans appear more like computers.
There's no doubt that the Turing Test is a tough problem to solve. But who would have thought this sort of reversed Turing Test would be, in some respects, just as tricky?
Was this post autogenerated? Would you know if it was? Would it matter?