Brilliant! The AI term for this is "hallucination", as noted on Wikipedia: OpenAI acknowledges that ChatGPT "sometimes writes plausible-sounding but incorrect or nonsensical answers"; this behaviour is common in large language models and is called "hallucination". I guess we also need to learn how to use this new 'toy'. If we try to trick it, we may get nonsense. If we ask it for an opinion, its views may not align with ours. If we ask for information on something unique that we have posted on the net, it will return our own text. Whenever we seek information, we can either take the answer at face value or search for validation. Validation can be tricky, because so much internet content these days is just 'copy/paste screen filler'.
All AI applications are only as good as the data and method they have been trained on. That's the problem you are currently seeing with ChatGPT: it uses reinforcement. Generally there are three methods of training:

Supervised: given examples of correct results for input data; both true and false examples need to be supplied to prevent incorrect deductions.
Unsupervised: no guidance; left to deduce its own rules.
Reinforcement: mostly left to its own deductions, but seeks feedback.

The data source for ChatGPT is unchecked, and the system is looking for training feedback from the masses: a real cheap way to improve a commercial product using an unpaid workforce!

I was involved with a mobile platform that did autonomous mapping of indoor occupied environments, and it showed up the problems with supervised/reinforcement teaching models. It used lidar to build a model of its surroundings and was given a mathematical 'hunger' to map new rooms rather than consolidate on improving its existing map. We turned it on and quickly found it had a lemming-like attitude to patio windows: it would repeatedly drive at full speed into the window, because the lidar passed straight through the glass and it took the returns from outside as a new room to map. The next version had vision, so we used transfer learning to teach the AI how to recognise a full-length, floor-to-ceiling glass window. All was good until the first snowfall outside, when it went back to running at the window at full speed. In training it to recognise the windows we had used images for good and bad, but in all cases the external environment was temperate spring/summer/autumn/winter; none had snow, which softened outlines and skewed the greyscale distribution in the image.

There are lots of cloud-based AI/ML resources from the big players on which you can run your own AI/machine-learning models on your own data: better performance, but at a commercial cost. The latest one I have done looks at vibration data from rotating prime movers (pumps) and gives warning of impending failure.
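To make the three training styles above concrete, here is a deliberately toy sketch (nothing to do with how ChatGPT is actually trained, and no real ML library involved): supervised learning fits a rule from labelled true/false examples, while reinforcement has no labels and just nudges its rule in response to feedback.

```python
def supervised(examples):
    """Learn a cutoff from labelled (value, label) pairs. Note that both
    true AND false examples are needed, or the cutoff drifts to one extreme."""
    positives = [v for v, label in examples if label]
    negatives = [v for v, label in examples if not label]
    return (min(positives) + max(negatives)) / 2  # midpoint between classes

def reinforcement(cutoff, feedback):
    """No labels at all: nudge the learned cutoff up or down according to
    thumbs-up / thumbs-down style feedback on its behaviour."""
    step = 0.1
    return cutoff - step if feedback == "too high" else cutoff + step

labelled = [(0.9, True), (0.8, True), (0.2, False), (0.3, False)]
cutoff = supervised(labelled)              # midpoint of 0.8 and 0.3 -> 0.55
cutoff = reinforcement(cutoff, "too high") # user feedback nudges it to 0.45
```

Unsupervised learning would be the third case: no labels and no feedback, leaving the system to cluster the values by itself, which is exactly where unchecked deductions (like "returns beyond the glass = new room") creep in.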
Condition monitoring turns periodic/breakdown maintenance into predictive maintenance, with the associated cost savings. A few of these are free to start:

Generative AI | Google Cloud
Free Machine Learning Services - AWS

Ross
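The vibration-monitoring idea can be sketched very simply: track the RMS energy of the vibration signal against a healthy baseline and raise a warning when it climbs well above it. This is a minimal illustration only; the names and the 2x warning factor are assumptions for the example, and a real system would use spectral (FFT) features and standards-based severity limits.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a window of vibration samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def check_condition(window, baseline_rms, warn_factor=2.0):
    """Flag a machine for inspection when vibration energy rises well above
    the healthy baseline. warn_factor is an illustrative threshold."""
    return "warn" if rms(window) > warn_factor * baseline_rms else "ok"

healthy = [0.1, -0.12, 0.09, -0.11]    # baseline trace from a healthy pump
worn    = [0.5, -0.55, 0.48, -0.52]    # growing bearing wear
baseline = rms(healthy)
print(check_condition(worn, baseline))  # -> warn
```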
Can it perform accurate translations? That would be the answer to my lack of time to learn French (and Dutch, and Hebrew, and Greek...), so I can stop staring helplessly at a queue of books about the Tunisian Campaign, Indochine, the Yom Kippur War, the fall of the NEI, the Metaxas Line, and so on!
OK, let's see; can you translate English text into French?

ChatGPT: Yes, I can certainly help you translate English text into French. Please provide the text you would like to translate, and I'll do my best to assist you.

Can it perform accurate translations? Would be the answer to me lack of time to learn French (and Dutch, and Hebrew, and Greek...), so I can stop staring helplessly at a queue of books about the Tunisian Campaign, Indochine, the Yom Kippur War, the fall of the NEI, the Metaxas Line, and so on!

ChatGPT: Yes, I can provide accurate translations. Here is the translation of your sentence from English to French:

"La réponse serait pour moi le manque de temps pour apprendre le français (et le néerlandais, et l'hébreu, et le grec...), afin que je puisse arrêter de regarder impuissant une file de livres sur la Campagne de Tunisie, l'Indochine, la guerre du Kippour, la chute de l'Indonésie néerlandaise, la Ligne Metaxas, et ainsi de suite !"

Please note that while the translation is accurate, some book titles and historical events may have specific French translations or names that are commonly used in the context of historical literature, and those might differ from a literal translation.
No better than any existing translation website. I would just use one of those, because then I would know my answer is not being spat out of a system designed to create a plausible response rather than an accurate one.
Yes and no. What you can then do is ask it (in the language you don't know) things like:

Is there any evidence of the age, sex or geographical location of the author of this text?
Where might I be likely to see/find such a text?
What changes might be made to make this text more/less formal or traditional/modern?
Is there a reason for the author's use of the term A over B?
How do you pronounce "This"?

And so on.
Another idea is to get a second opinion on the translated text. I put the French translation back into ChatGPT and into DeepL, and this is what I got from both:

"The answer would be for me the lack of time to learn French (and Dutch, and Hebrew, and Greek...), so I can stop staring helplessly at a queue of books about the Tunisian Campaign, Indochine, the Yom Kippur War, the fall of the NEI, the Metaxas Line, and so on!"

...a slight variation. Maybe someone can check on another translation platform.
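This back-translation check can even be rough-scored automatically: compare the original text with the round-trip result and see how closely they agree. The sketch below uses Python's standard-library `difflib` as a crude similarity measure; the example strings are illustrative, and a high score is reassuring rather than proof of a correct translation.

```python
from difflib import SequenceMatcher

def round_trip_agreement(original, back_translated):
    """Rough similarity score in [0, 1] between the original text and its
    round-trip (A -> B -> A) translation, ignoring case differences."""
    return SequenceMatcher(None, original.lower(),
                           back_translated.lower()).ratio()

original = "the answer would be for me the lack of time to learn French"
back     = "the answer would be, for me, the lack of time to learn French"
score = round_trip_agreement(original, back)
print(f"{score:.2f}")   # close to 1.0: only punctuation shifted
```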
Um... yes and no? Many of your points there aren't actually about reading a text in another language. They might be useful questions to ask, but they're not what I was addressing.

You can ask it these things. But I need to stress, again, that these bots are intended to produce plausible results, not necessarily accurate ones. I know this is all of 10 months ago, but academics discovered that when they asked ChatGPT to generate replies backed by references, the algorithm generated fake references, e.g. Why does ChatGPT generate fake references? - TECHE

What the bot is doing is not thinking. It is generating text which is supposed to be a plausible answer to the question. It's true that a lot of text has been fed into these things, so it may be able to give an accurate answer to "How do you pronounce this word?" or find answers to biographical questions about the author of a work. And it might be able to give you an answer to "the reason for the use of term A over B" based on the definitions of those words. But the answer to any question posed to it could be shit that it made up.

And again, I don't think it has any better ability to generate an English translation of a section of text in another language than any translation website, because the bot is probably making a call out to another service to do the translation in the first place.
Quite funny on opening this page during this discussion. Chrome mobile triggering its auto translate. I may start calling it Chát Jeut Puet Te.
I just meant that in one sense you're completely right: it doesn't seem to offer better translations than some other sites, but I wouldn't agree that it is 'no better' than other good translation software, because of the highly versatile supplementary functions I gave examples of—ones which do not feature elsewhere. I appreciate that you were speaking only of the effectiveness of the translation. I will add that when translating to and from a language, you have the ability to specify the style of the translation, which is quite novel: "I'd like a translation of this French menu in the style of an Edwardian child from a wealthy family."
Those are interesting points, Charley. I'm looking for the "thumbs up" emoji but I don't think we have one, so...
Binned just now. Using AI to generate more convincing syntax for spam, I reckon. Turing test failed, for wider reasons, but maybe the first indication of 'bloody AI!' for me. It'll have its good points (I saw a chap on Twatter with no programming experience writing odd little apps with it), but we are humans, and will use it negatively the first chance we get.
It doesn't quite work—you can see the stitches, as it were. "It sounds like a fantastic series" is very natural. But "..., providing a behind-the-scenes look at their restoration work" is clearly ripped from promotional material. If it only 'sounds like' a fantastic series, the implication is that the author hasn't seen it; and if he or she hasn't seen it, he or she cannot know what it 'provides', only what it seeks, claims or aims to provide. And 'such insights' when no insights have been detailed also fails. As ever, though, the current level of A.I. technology is the worst it will ever be! Interesting times.
Well, the predictions of job losses from AI applications have legs: ChatGPT has just taken out its CEO, Sam Altman. Ross
The Royal Institution recently had an excellent lecture on AI, with a couple of take-homes.

First, on ChatGPT: the function was initially based on texting autocomplete, so it suggests not the most correct answer but one biased towards the most common answer.

Second, some examples of Tesla's AI coming across something it had not been trained on and trying to resolve it from its rule base:

A horse and cart
Flags used in place of traffic lights

Ross
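The "most common, not most correct" point can be shown with a toy autocomplete: count which word most often follows each word in a corpus, then always suggest the top count. The corpus and words here are invented for illustration, and real language models are vastly more sophisticated, but the bias is the same in kind.

```python
from collections import Counter

def train_bigrams(corpus):
    """Count, for each word, which word follows it and how often."""
    counts = {}
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts.setdefault(a, Counter())[b] += 1
    return counts

def complete(counts, word):
    """Suggest the MOST COMMON continuation -- not the most correct one."""
    return counts[word].most_common(1)[0][0]

# 'blue' appears twice after 'is', 'green' once -- so 'green' is never suggested,
# however correct it might be in context.
corpus = "the sky is blue the sky is blue the sky is green"
model = train_bigrams(corpus)
print(complete(model, "is"))   # -> blue
```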
Beautiful stuff. DPD AI customer service. Careful now, corporates. Just because something's trendy doesn't mean it's a good idea. : https://twitter.com/ashbeauchamp/status/1748034519104450874?t=T0IyXaypfmmTQ5FGp3yAfg&s=19
"O'right bab, how's it gooin'? The bleedin' parkin's murder up our road, the walkin's killin' me." Cue armed response, door put in, tea and medals. Police worried 101 call bot would struggle with 'Brummie' accents. Oh joy, it's a brave new world. Kind regards, always, Jim. P.S. This post was randomly generated by an AI chatbot, having been fed storyline parameters and keywords, by an eejit
Brave new world indeed. Admittedly this is only with regards to a science fiction/fantasy fiction micropress, but this is part of their announcement for why they are closing.