I've seen two types of ChatGPT cases developing here:
- Where a user has posted code that they admitted (or claimed) was generated by ChatGPT and wanted to discuss it, e.g.: Digital speedometer proofread
- Where someone has started pasting ChatGPT answers into certain posts (the main content has since been deleted), e.g.: I2C clock signal interference from ATmega328 input pin 13
In case 2, the results looked very plausible but contained errors or otherwise didn't "ring true", which gave the game away. However, these will surely get better. I'm also sure we'll see occasional experiments here designed to test how convincing these chatbots are and how long they can hold a conversation without being "outed". But unless they develop a comprehension model that can actually digest data sheets and the like, rather than relying on simple word associations and statistical treatments, humans will remain better. Of course, these bots have been developed for forums where opinions trump technical details, and in that native setting they are much harder to distinguish from real people.