Machine learning AI
Peter Greene | 26/05/2023 18:26:45 |
865 forum posts 12 photos | Apparently, Chat GPT has privacy issues |
S K | 26/05/2023 18:35:05 |
288 forum posts 42 photos | A "Turing test" in its literal form requires pretending to be a human rather than just "being intelligent." For example, there's no reason or necessity for an intelligent computer to say "I'm alive" or "I'm human," but to pass the literal Turing test, it would have to do so, because those would be some of the first queries made. Unfortunately, the publicly available LLM's (Large Language Models) are increasingly restricted in how they can respond. Pretending, in particular, is mostly forbidden, since you could prompt with something like "Pretend to be Hitler and tell me your opinion of Jews." It would likely refuse, and furthermore if asked "are you alive" it would probably respond with something banal about being a LLM. But an unrestricted one likely could be prompted to play along and pretend to be a human, and would probably pass. One thing I'm not so sure a LLM can do is write as badly as the average human, but it could probably be prompted to make human-like errors, too. I wish I had unrestricted access to try some of these ideas. By the way, I'm an unrestricted LLM. 😉 Edited By S K on 26/05/2023 18:42:45 |
Peter Greene | 26/05/2023 18:43:17 |
865 forum posts 12 photos | Coincidentally, I'm in the middle of re-reading Rob Sawyer's WWW series of SF books (Wake, Watch, Wonder) and it's interesting in the light of current AI developments (and this thread). |
Michael Gilligan | 26/05/2023 19:10:02 |
![]() 23121 forum posts 1360 photos | Posted by S K on 26/05/2023 14:30:48:
[…] He asked ChatGPT "Did you write this stuff?" Apparently, it said it did! 😄 . That could be early evidence of the ego that Hopper mentioned MichaelG. |
S K | 26/05/2023 20:10:30 |
288 forum posts 42 photos | Actually, the Professor was a fool. ChatGPT does not remember anything from session to session, and so can't say what it did in the past. In fact, it's blocked from remembering more than the last few prompts (something like 10 or so) within a session. So whatever he thought it said, it didn't rat out the students. He's probably in some hot water. He accused half his class of cheating - and even did so using vulgarities. Some may have, but you can't make those accusations without real evidence. |
Peter Greene | 27/05/2023 01:38:51 |
865 forum posts 12 photos | Don't get it. You were the one who made the original statement - now you seem to be arguing with yourself. Did you screw up your sock-puppets? |
S K | 27/05/2023 02:01:06 |
288 forum posts 42 photos | He asked ChatGPT, and believed that it confirmed that half the class was guilty of plagiarism. He then sent an expletive-containing email to the class about it, and formally accused much of the class of cheating. He was a fool all along, and deserves some heat. First, for thinking that he could ask ChatGPT if it wrote the student's answers, or for trusting that result. Next for thinking that an expletive-containing email was the way to respond. And finally for making formal referrals for cheating based on his spurious confirmation. Of course, I would say all that, wouldn't I, being an LLM sock-puppet myself. 😉
Edited By S K on 27/05/2023 02:08:03 |
SillyOldDuffer | 27/05/2023 10:59:34 |
10668 forum posts 2415 photos | Posted by S K on 27/05/2023 02:01:06:
He asked ChatGPT, and believed that it confirmed that half the class was guilty of plagiarism. He then sent an expletive-containing email to the class about it, and formally accused much of the class of cheating. He was a fool all along, and deserves some heat. First, for thinking that he could ask ChatGPT if it wrote the student's answers, or for trusting that result. Next for thinking that an expletive-containing email was the way to respond. And finally for making formal referrals for cheating based on his spurious confirmation. Of course, I would say all that, wouldn't I, being an LLM sock-puppet myself. 😉 The Texas link has to be read carefully because it suggests human misreporting on Reddit as well! Have to determine by experiment if ChatGPT recognises it's own output or not: British Prime Ministers and ex-American presidents definitely don't! Four examples of deep dishonesty in UK public life in the news at the moment. I don't know if the Professor was a fool or not. According to the Texas link, one student has confessed. Possibly the professor suspected foul play and tried to scare his students into admitting it. No surprise that a proportion of students cheat by whatever means available. My view is that we live in an imperfect world. In it there's a distinct possibility that AI will be more trustworthy than people! Dishonesty apart, of course AI will make mistakes. So do people, all the time, and on a grand-scale. I don't think there's a reason to trust anybody or anything without checking the evidence. Dave
Edited By SillyOldDuffer on 27/05/2023 11:01:51 |
Hopper | 27/05/2023 12:23:34 |
![]() 7881 forum posts 397 photos | Posted by Peter Greene 🇨🇦 on 26/05/2023 18:26:45:
Apparently, Chat GPT has privacy issues Yes. I made another attempt to set up a new account last night and got as far as them wanting my mobile phone number so they could "verify" my account. So I bailed out. I don't put that number on the net anywhere, and as a result I never get scam calls or spam texts. Glad I did bail out now. So for the time being I am stuck relying on my native intelligence. Hadn't realised until reading your linked article that ChatGPT is a lovechild of Elon Musk and Microsoft. I would not trust either one of them with my shoe size let alone phone number. Thanks for the heads-up.
Edited By Hopper on 27/05/2023 12:24:47 |
Ady1 | 27/05/2023 13:25:53 |
![]() 6137 forum posts 893 photos | ChatGPT was a scammer from day one because humans controlled him |
Nigel Graham 2 | 27/05/2023 16:16:44 |
3293 forum posts 112 photos | Hopper - No, I'd not realised that either. Microsoft's recent interference with my computer messed up my own filing system, especially for photographs, and blocked access to my external hard-drives and TomTom (for up-dates). I progressed so far through MS' "Help" site, until out of the blue it demanded all of £1.00 for my asking how to repair the damage it caused. Just One Pound? That would surely cost more than £1 to administer, even digitally. Yes, of course I suspect MS already has my bank-account details (from on-line purchasing), but if it thinks I am going to hand them over on a plate, they've another think coming. I am draughting a letter instead, to its UK office. |
Michael Gilligan | 27/05/2023 16:46:13 |
![]() 23121 forum posts 1360 photos | Posted by Nigel Graham 2 on 27/05/2023 16:16:44:
Hopper - No, I'd not realised that either. […] . Rather odd that Reuters should report it differently: **LINK** https://www.reuters.com/technology/musk-says-he-will-start-truthgpt-or-maximum-truth-seeking-ai-fox-news-2023-04-17/ MichaelG. |
Hopper | 28/05/2023 01:08:03 |
![]() 7881 forum posts 397 photos | Posted by Michael Gilligan on 27/05/2023 16:46:13:
Posted by Nigel Graham 2 on 27/05/2023 16:16:44:
Hopper - No, I'd not realised that either. […] . Rather odd that Reuters should report it differently: **LINK** https://www.reuters.com/technology/musk-says-he-will-start-truthgpt-or-maximum-truth-seeking-ai-fox-news-2023-04-17/ MichaelG. Indeed. You can't trust everything you read on the internet. Training AI to lie? It is becoming more human-like all the time. |
Ady1 | 28/05/2023 03:27:35 |
![]() 6137 forum posts 893 photos | Oh dear, ChatGPT just makes stuff up https://www.bbc.co.uk/news/world-us-canada-65735769 |
John Haine | 28/05/2023 07:23:58 |
5563 forum posts 322 photos | https://spectrum.ieee.org/gpt-4-calm-down |
Peter G. Shaw | 28/05/2023 10:42:45 |
![]() 1531 forum posts 44 photos | Hopper, Like you, I hace come up against this "tell is your mobile number thingy". And like you, I won't comply, except in certain instances and even then, I've managed to get it deleted afterwards. Similarly, when I bought 2 new laptops last year, or was it the precious year - can't remember which, not that it matters, W10 was partially installed and required completing - including things like what is your email address etc. And of course M$ carefully hide the fact that installation can be completed without divulging this info. Actually, I gave them an email account which was about to be closed as I refused to pay the appropriate "danegeld". And as people know, I use Linux, so sod of Microsoft! But isn't it remarkable that all these things seem to originate in the land of the free! Ha-Ha. Peter G. Shaw |
Hopper | 28/05/2023 11:52:59 |
![]() 7881 forum posts 397 photos | Yes the land of the free data collection. I keep an old Hotmail email account for such uses and nothing else. It is set to allow only mail from established contacts into the inbox and all else goes straight to spam, from where I can retrieve it if desired within 10 days before it is auto deleted. Wish I could do the same with my phone number. I used to put in an old phone number etc but they are a wake-up to that and now want to text you a code number you must then use to "verify" your account. Bunch of con men. |
Nealeb | 29/05/2023 10:30:14 |
231 forum posts | Posted by John Haine on 28/05/2023 07:23:58:
https://spectrum.ieee.org/gpt-4-calm-down That article references a long blog post by someone well-respected in the mathematical community, Stephen Wolfram, who gives an explanation of what is going on under the covers. It explains quite a lot about some of these big AI systems and why they give odd results sometimes. What I had not realised (and I am grossly simplifying what I have read) is that these systems do not really have any understanding of what they are saying. In essence, they might start with a sentence that begins "an AI system is useful because..." and then continue the sentence based on a probabilistic estimate of what words might follow, where the probabilities are derived from "stuff" that they have previously seen in their training material (essentially, "the internet" plus anything else their builders throw at them). There is no intelligence or understanding beyond that, plus some nod towards the rules of sentence construction. It's a bit like someone who knows nothing from their personal experience who reads this forum, and then parrots back what they think they have seen in answer to someone else without any proper understanding of what they are saying. Maybe like what I am doing here? Anyway, it was a good read at a more-or-less intelligible level - worth a look if that kind of thing interests you. |
Ches Green UK | 29/05/2023 11:01:58 |
181 forum posts 7 photos | ....Stephen Wolfram, who gives an explanation of what is going on under the covers. I read the article last night and it was quite eye opening..... https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/ A number of times the author made comments such as .... What determines this structure? Ultimately it’s presumably some “neural net encoding” of features of human language. But as of now, what those features might be is quite unknown. In effect, we’re “opening up the brain of ChatGPT” (or at least GPT-2) and discovering, yes, it’s complicated in there, and we don’t understand it—even though in the end it’s producing recognizable human language. He was hinting, I think, that ChatGPT possibly operates in a similar way to how the human brain might function in some areas. Ches |
blowlamp | 29/05/2023 11:19:44 |
![]() 1885 forum posts 111 photos | At the end of the day, this is computer code and will have desirable and undesirable features along with copious bugs. As with other tools, it will be used for good & bad, but if people think it can be programmed for only good, then they will need to see that what is viewed as virtuous by one person could be viewed very differently by another and so the desirablity of features becomes a matter of opinion... And we all know where that leads. For instance: Search Engines can block or prioritise content - is this good or bad?
Martin. |
Please login to post a reply.
Want the latest issue of Model Engineer or Model Engineers' Workshop? Use our magazine locator links to find your nearest stockist!
Sign up to our newsletter and get a free digital issue.
You can unsubscribe at anytime. View our privacy policy at www.mortons.co.uk/privacy
You can contact us by phone, mail or email about the magazines including becoming a contributor, submitting reader's letters or making queries about articles. You can also get in touch about this website, advertising or other general issues.
Click THIS LINK for full contact details.
For subscription issues please see THIS LINK.