Gary Wooding | 07/08/2023 12:00:32 |
1074 forum posts 290 photos | I've just done a little test of BING - which is Micro$oft's AI-based search engine. I asked the question 'how many vowels are there in the words o this sentence' (notice I omitted the 'f' from 'of'). Here is the response... There are 10 vowels in the words of this sentence. A, E, I, O and U are the vowels in English. Here is how I counted them: how: 1 vowel (O)
Total: 10 vowels. It clearly has problems with counting and adding up. What do you think of AI now?
|
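For reference, a literal count of the letters A, E, I, O and U in the sentence as typed is easy to check with a few lines of Python. This is only a minimal sketch: it ignores 'y' and treats the truncated 'o' as a word in its own right.

```python
# Count the vowels a, e, i, o, u in the exact sentence put to Bing.
sentence = "how many vowels are there in the words o this sentence"

vowels = set("aeiou")
per_word = {word: sum(ch in vowels for ch in word) for word in sentence.split()}
total = sum(per_word.values())

print(per_word)                 # per-word tally, e.g. {'how': 1, 'many': 1, ...}
print("Total vowels:", total)   # 16, not the 10 that Bing reported
```

Counted that way the answer is 16; allowing the 'y' in 'many' as a vowel would make it 17.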
not done it yet | 07/08/2023 12:31:05 |
7517 forum posts 20 photos | It clearly needs to do some learning. How to read and how to count - as well as actually knowing that there are only 5 vowels in our alphabet! |
Michael Gilligan | 07/08/2023 12:35:43 |
23121 forum posts 1360 photos | Interesting that having correctly listed the five vowels, it then declared Y to be a vowel … Other I.T. providers are available. MichaelG. . Edit: Never forget that it was Micro$oft that introduced the utterly crass ‘Office Assistant’ into business computing https://en.wikipedia.org/wiki/Office_Assistant Edited By Michael Gilligan on 07/08/2023 12:45:50 |
John Haine | 07/08/2023 13:48:07 |
5563 forum posts 322 photos | Y often serves as a vowel depending on context, as Bing would probably tell you. Or Wikipedia. |
Michael Gilligan | 07/08/2023 14:19:43 |
23121 forum posts 1360 photos | Posted by John Haine on 07/08/2023 13:48:07:
Y often serves as a vowel depending on context, as Bing would probably tell you. Or Wikipedia. . Then the fact that BING would contradict itself just serves to prove my point. MichaelG. |
Speedy Builder5 | 07/08/2023 14:33:08 |
2878 forum posts 248 photos | Has it done a spell check (in who knows which language) before counting?? |
Mick B1 | 07/08/2023 15:26:33 |
2444 forum posts 139 photos | When satnavs first appeared, the salesmen in our software office eagerly acquired them (being an 'early adopter' was of course a characteristic they were keen to encourage). For 2 or 3 years the office rang from time to time with gales of laughter as salesmen loudly described their travels and where their satnav had thought they were. It was only once these tales had permanently dried up that I finally decided to buy one. With few exceptions I've found mine boringly reliable, and more so now I can just use my phone. I think the same might happen with AI - once a sensible use for it is found. |
Chris Mate | 07/08/2023 19:33:11 |
325 forum posts 52 photos | You need to start thinking differently about AI from the way it is projected, as some form of intelligence. |
Ady1 | 07/08/2023 19:54:17 |
6137 forum posts 893 photos | Calling it AI is a misnomer; it's just really fast hardware developed from the graphics industry. The software is the key, and that's still a slow development slog. Eventually someone will develop software which can write software, and that's the point at which things will really start to shift. I've only ever seen one really good development emulator, and that was for DOS. |
SillyOldDuffer | 07/08/2023 20:16:46 |
10668 forum posts 2415 photos | Posted by Ady1 on 07/08/2023 19:54:17:
... The software is the key, and that's still a slow development slog. Eventually someone will develop software which can write software, and that's the point at which things will really start to shift ... Getting exciting. Watch this YouTube video in which ChatGPT writes Python. There are other examples... |
John Doe 2 | 08/08/2023 01:42:21 |
441 forum posts 29 photos | AI at the moment is a very sophisticated development of that experiment where you fill a jar with sweets and ask people to estimate how many sweets are in the jar. The replies populate a bell curve - the more replies you can get, the stronger the bell curve becomes. Given a large enough number of responders, the value at the centre of the bell curve turns out to be very close to the actual number of sweets in the jar. There are a few stupidly low answers and a few stupidly high answers, but the majority form an average which spookily turns out to be very close to the actual number. The 'Ask the audience' feature of "Who Wants to Be a Millionaire" uses this phenomenon: ask enough people and you will find the correct answer. The answer selected by the largest share of the audience is almost always the correct one.

ChatGPT is an AI algorithm which has been "fed" many thousands of pages of text, graphics, photographs etc., and has the ability to take any question asked of it and form a bell curve of answers from that data. In that respect it does not have actual intelligence, but draws from its human-derived database to select correct answers, via the bell-curve method.

My mother recently wanted our family members to write poems based on a particular set of phrases relevant to our family, as a fun exercise. My son, being of a very technical mind-set, asked ChatGPT to write one and input all the phrases as its data. Within seconds it had generated a full poem, in proper verses and all rhyming properly, using all the phrases - something that would take each of us weeks to do. So, in that respect, AI can be a very useful tool.

It is not self-aware and is merely creating a response from a myriad of human responses, like a very sophisticated 'ask the audience'. It can be thought of as a more sophisticated version of the Google search engine, where you type a question into the search bar and it responds with pages of 'hits' relating to that particular question. |
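The 'sweets in a jar' effect described in the post above is easy to simulate. The sketch below is purely illustrative: the jar size, the number of guessers and the error range are all made-up values.

```python
# Toy simulation of the 'sweets in a jar' effect: individual guesses are
# wildly off, but their average lands close to the true count.
import random

TRUE_COUNT = 487        # hypothetical number of sweets in the jar
NUM_GUESSERS = 10_000

random.seed(1)
# Each guess is the true count scaled by a random factor between 0.4 and 1.6,
# i.e. anything from a 60% under-estimate to a 60% over-estimate.
guesses = [TRUE_COUNT * random.uniform(0.4, 1.6) for _ in range(NUM_GUESSERS)]

average = sum(guesses) / NUM_GUESSERS
print(f"True count: {TRUE_COUNT}, crowd average: {average:.0f}")
# With this many guessers the average typically lands within a percent or two
# of the true count, even though no individual guesser is reliable.
```

The caveat, which also applies to 'ask the audience', is that the guesses need to be independent and roughly unbiased; a crowd that has all absorbed the same wrong 'fact' will average out to the same wrong answer.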
Robert Atkinson 2 | 08/08/2023 07:46:53 |
1891 forum posts 37 photos | The big issue that is emerging with ChatGPT and the like is that they make up "facts" to produce an answer. This has included false reports of sexual harassment complaints against an individual. I believe this was GPT-4 not ChatGPT, but there are a lot of low-quality reports about it on the internet. Robert |
SillyOldDuffer | 08/08/2023 09:30:50 |
10668 forum posts 2415 photos | Posted by Robert Atkinson 2 on 08/08/2023 07:46:53:
The big issue that is emerging with ChatGPT and the like is that they make up "facts" to produce an answer. This has included false reports of sexual harassment complaints against an individual. I believe this was GPT-4 not ChatGPT, but there are a lot of low-quality reports about it on the internet. Robert Might be good evidence that AI is getting more human, not necessarily a good thing! Humans make up "facts" to produce answers, and also make false sexual harassment complaints. Slander and libel keep lawyers well paid, we fall for scammers, politicians love propaganda, civil servants are 'economical with the truth', newspapers are biased, people believe in cults and ridiculous conspiracy theories, cock-ups are covered up, and toddlers plastered in chocolate deny they ate the cake. Although recent AI developments are a breakthrough, AI has a long way to go. Filtering fact from fiction is particularly difficult - look at how bad people are at doing it! Dave |
S K | 08/08/2023 19:10:55 |
288 forum posts 42 photos | ChatGPT and its cousins are "large language models" and are not AIs. The "intelligence" they exhibit is purely emergent: an amazing side-effect of training them to write. Still, judging their "intelligence" is amusing, given how well they can do. Ask one to write a college essay on a popular topic (one with a lot of text about it in its training set), let's say the American Civil War, and it would likely pass. Indeed, it may well do considerably better than the average college student. Certainly its English and grammar, punctuation, etc., will be better than average! But ask it to write an essay about something that is new or obscure, say "write an essay about how LK-99 works" (the new, highly controversial "room temperature superconductor"), and it will likely fail because it had little or no training data on that subject. They will also often fail at "trick" questions, since they have no in-built reasoning skills at all; they just predict text. And we've all seen how confident in their own opinions they can be, even when blatantly wrong: their training data was full of very authoritative and confidently written text. The obvious holes in their abilities, such as a lack of math skills, are being patched by allowing them access to other tools. All that said, these large language models do at least feel a little like general AI, at least when they aren't goofing up. Edited By S K on 08/08/2023 19:13:12 |
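To make "they just predict text" concrete, here is a deliberately crude sketch of next-word prediction: a bigram counter trained on a tiny made-up sentence. Real large language models use sub-word tokens and billions of parameters, but the underlying task of predicting the next token from the context is the same.

```python
# A toy next-word predictor: for each word, remember which words followed it
# and how often, then predict the most frequent follower.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat chased the mouse the cat slept"

followers = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if never seen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))   # 'cat' - seen three times after 'the' in the toy text
```

Nothing in that mechanism does reasoning or arithmetic; any apparent 'intelligence' has to emerge from the statistics of the training text, which is the point being made above.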
Alan Charleston | 09/08/2023 07:22:47 |
157 forum posts 26 photos | Meanwhile the AI is thinking - "What an idiot. He left the f off of. I won't mention it - I don't want to hurt his feelings." Regards, Alan |
Rob McSweeney | 09/08/2023 08:47:11 |
98 forum posts | Am I the only dinosaur to have read the title of this thread and assumed it referred to aluminium? |
Michael Gilligan | 09/08/2023 08:57:08 |
23121 forum posts 1360 photos | Posted by Rob McSweeney on 09/08/2023 08:47:11:
Am I the only dinosaur to have read the title of this thread and assumed it referred to aluminium?
. I was just about to observe that the subject line looked clear enough to me … when I noticed the subtly different font in Alan’s post and yours !! MichaelG. . Edited By Michael Gilligan on 09/08/2023 09:05:58 |
SillyOldDuffer | 09/08/2023 11:06:59 |
10668 forum posts 2415 photos | Posted by S K on 08/08/2023 19:10:55:
ChatGPT and its cousins are "large language models" and are not AIs. ... But ask it to write an essay about something that is new or obscure, say "write an essay about how LK-99 works" (the new, highly controversial "room temperature superconductor"), and it will likely fail because it had little or no training data on that subject. They will also often fail at "trick" questions, since they have no in-built reasoning skills at all; they just predict text. And we've all seen how confident in their own opinions they can be, even when blatantly wrong: their training data was full of very authoritative and confidently written text. The obvious holes in their abilities, such as a lack of math skills, are being patched by allowing them access to other tools. All that said, these large language models do at least feel a little like general AI, at least when they aren't goofing up. ... Much depends on how we define 'intelligence'. It's interesting that S K's examples of failings in ChatGPT 'intelligence' are also human failings! Many people also exhibit 'how confident in their own opinions they can be, even when blatantly wrong.' And the only way I could write an essay on LK-99 is by reading the Wikipedia article, or by researching in a science-focused university library. ChatGPT and I start from the same place. Like as not, human intelligence evolved as a result of our distant ancestors developing language. Clever humans didn't invent language because they were specially intelligent from the get-go; it evolved. And if human intelligence evolved, then so can AI. Changing the subject slightly, Model Engineers rarely work with boiler pressures over 150 psi. I see from the Wikipedia article on High Temperature Superconductors that extraordinary pressures are needed to make them work. CSHx super-conducts at 270 gigapascals, which is nearly 40 million psi. Doesn't look like I'll be making any in my garage! Dave
Edited By SillyOldDuffer on 09/08/2023 11:08:41 |
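For anyone who wants to check the conversion above, the arithmetic is a one-liner; the only assumption is the standard factor of roughly 6894.76 pascals per psi.

```python
# Convert 270 gigapascals to psi and compare with a 150 psi model boiler.
PASCALS_PER_PSI = 6894.757

pressure_pa = 270e9                      # 270 GPa expressed in pascals
pressure_psi = pressure_pa / PASCALS_PER_PSI

print(f"270 GPa = {pressure_psi / 1e6:.1f} million psi")            # about 39.2
print(f"That is {pressure_psi / 150:,.0f} times a 150 psi boiler")  # about 261,000
```

So 'nearly 40 million psi' is about right: roughly a quarter of a million times a model boiler's working pressure.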
Rob McSweeney | 09/08/2023 11:36:33 |
98 forum posts | Posted by Michael Gilligan on 09/08/2023 08:57:08:
Posted by Rob McSweeney on 09/08/2023 08:47:11:
Am I the only dinosaur to have read the title of this thread and assumed it referred to aluminium?
. I was just about to observe that the subject line looked clear enough to me … when I noticed the subtly different font in Alan’s post and yours !! MichaelG. . Edited By Michael Gilligan on 09/08/2023 09:05:58 It was more a reflection that, to those of us in our 60s and above who haven't had much involvement with I.T., artificial intelligence is a concept beyond our radar. To me "A.I." would mean artificial insemination. Edited By Rob McSweeney on 09/08/2023 11:37:27 |
Robin | 09/08/2023 11:56:30 |
678 forum posts | I don't think AI will be a problem. If machines could take over the world it would probably be by ruining our economies with fake alarm stories on social media, promoting a cashless cryptocurrency base and tricking stupid Chinese lab workers in Wuhan to cook up plague pathogens. A big alarm bell to listen out for would be when rich people started trying to leave the planet. Robin |