TL;DR -- We have looked at Bard and ChatGPT. The latter held our interest while the records at Sherborne were being scrutinized. That scrutiny turned up unexpected data about Thomas and Margaret, enough to raise questions about what we know. We had already started the FAQ way back. So, with these two subjects in hand, we can deal with technology while we adjust the story to account for the new information. In short, that is nothing to moan about; no, it's right in line with how work ought to proceed.
This is a continuation of the last post (Intro to Bard). Bard is the generative LLM (large language model) approach being tested by Google. As we have said, technology, a broad field, is our interest. Of late, enthusiasm has run unbounded toward what we call AIn't; in particular, though, we will look more at ChatGPT, which came on the scene in November of 2022.
This post recaps activity in two areas, but, first, here is a brief summary of posts related to the subject:
- How dumb is AI? -- this 2021 post reported on an article in the IEEE Spectrum. IEEE.org is a technical organization that is over 100 years old and deals with the core of technology (namely, power and its meaningful use). AI and knowledge processing rely upon the work of IEEE folks for computational resources.
- Current challenge -- this 2022 post discussed several topics; one of these was the Harvard Business Review's look at AI. Too, at the time, the ACM.org, which consists of folks from academia and from technical management, was paying attention.
- A(rtificial) I(ntelligence) researched properly -- this 2021 post recapped the technology use of TGS, Inc., since the beginning. This bullet is here as it is a parallel activity to what is discussed below.
- Two searches to support future work as references: AI (the theme of today's news); truth engineering (principles from engineering services related to computational systems and their issues).
- In February of 2023, we were asked about ChatGPT since there was a lot of discussion by educated folks about its abilities. By that time, the enthused had already run down the road with uses, even to the extent of introducing packages built on the system. We, on the other hand, were more cautious, as issues had already been made known. There was a clever little phrase associated with the problem: hallucination. To us, the underpinnings are mathematics, which we can lift to general awareness. So, some of the behavior is due to the way that the conversational aspect is tuned. Then, we have a positive tone that was assumed to be the right one for this type of interface. Know-it-all is what I think. Technically, we are talking about a type of 'interpolation' and will be explaining what we see. But there is another issue: the push toward unsupervised learning, motivated, somewhat, by omniscience considerations (look at the Singularity arguments as a huge factor in this discourse). The input, though, was crap for several reasons. So, why would a system trained on bad data not have issues, too? One could almost say "liar" of the resulting paradoxical behavior, except that there is no consciousness involved, nor the choices tied to that human ability. Frankly, it's a mess. Crap cannot be trained out after it pollutes the thing. The method dropped the limitations that supervised learning might have brought. Lots to discuss.
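The tuning point above can be made concrete with a small sketch. Conversational models pick their next word by sampling from a probability distribution, and a "temperature" setting sharpens or flattens that distribution. The scores and temperature values below are made-up illustrations, not anything taken from ChatGPT itself:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw candidate scores (logits) into probabilities.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next words.
logits = [2.0, 1.0, 0.5]

confident = softmax_with_temperature(logits, temperature=0.2)
hedged = softmax_with_temperature(logits, temperature=2.0)

print(max(confident), max(hedged))
```

At low temperature, nearly all of the probability lands on the top candidate regardless of how weak the underlying evidence is, which is one mathematical reason a tuned interface can sound like a know-it-all even while hallucinating.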
- At the same time that we were looking at ChatGPT and being involved in discussions, a researcher stepped through the Sherborne records and found that all of the children of Thomas and Margaret had been baptized in England. There were two things for us to look at. One had to do with Gloucester's 400th: if Thomas and Margaret were there, we needed to write this up. We had expected to use 1624 and were waiting for material that supported that year. The researcher reported his findings on WikiTree, whose response was to split the Thomas Gardner profile into two persons. The father of the children was married to Margaret Fryer; he was written up by Dr. Frank. The other profile has little information beyond the reported data and pointers to the two profiles. Research will fill in the pieces. How did we miss this work? Well, catching up on ChatGPT took some time and attention. We were only a couple of weeks late to the show and made our remarks about the state of affairs.
Now, an interesting twist is that both ChatGPT and Bard carry erroneous data. How do we make this known so that a correction is made, with supporting remarks left behind? We need to look at the whole affair and the technical details. So, we're working on that, too.
08/05/2023 -- The post on researching AI (3rd bullet, 1st list) has risen, of late, to the top-watched list. Too, we are working on GB XIII, 1, which will cover our work so far this year. ... With respect to Thomas and Cape Ann, we will adopt "new directions" as the theme.
08/06/2023 -- Pointed to the Getting Technical post, which references work with Bard. Recently, we have given Bing's little thingee a run-through. We want to do a little technical bit, on a newly framed subject, at each of these and make a comparison. One future theme will address how to get these things to settle down so that they can help us hone our reason and understanding (a la Kant, for one). Right now? It's a mess, quite frankly.