TL;DR -- Since technology is a focus deserving of sustained attention, we will need to post on it regularly. Expect that in 2024 as we commemorate the arrival of Thomas and Margaret (or not, depending upon what our research shows).
---
We ought to publish these on a regular basis and will start doing so next year. For this post, we'll look briefly at a New Yorker article, then add a comment about a recent post. After that, we'll show a graphic from one of our web servers. Finally, we'll start a list of our web presence and gather all of the technical posts under various categories.
Ambitious task? Yes. We'll start today and finish this up within a week. At the same time, we will consider how to do these posts and at what frequency.
---
AIn't and AI. We'll be more specific next year.
- The Godfather of AI - See the New Yorker, 11/17/2023. This is the main article; it deals with an interview of Geoffrey Hinton. A genealogical notion was brought up: he is a Brit and a descendant of George Boole, of the logic and algebra that we all love. After casting about for some direction, Hinton picked neural nets. One early act was popularizing the Boltzmann Machine. As we all know, none of the machines/methods found so far is all-powerful. Subsequent work ended up with the back-prop algorithm (a minimal sketch follows this list). In a sense, back-prop is similar to numeric processing oriented toward resolving a multiple-body problem. Definitely, constraint satisfaction applications need a good look. One interesting tidbit: the author of the article used Kafka's worldview as the basis for an example, one written up in a 1986 issue of Nature. To note, please: on seeing an interaction with ChatGPT, Hinton was astonished enough to speak of a "level of understanding" and even uttered "alive" in that context. He has seen lower-level reality in that his later work deals with neuromorphic approaches.
- The Economist as example - See the last post: Science and AI. It was motivated by an article in the 11/25/2023 issue of the paper (not a magazine, they say) in which a reporter hypes some good work dealing with rogue waves. Everyone ought to be interested, as waves are everywhere and have long drawn thinkers' attention. Yet, in terms of the seas, this is old research with lots of data, and people have done an exemplary job of trying to understand that data. So, the researcher used a neural net to look at pre-processed data in which the mathematical elements were emphasized. Okay. Good results. But a genetic approach (to be discussed) was about as capable; a sketch of such a comparison follows this list. The researcher had a disclosure at the top of his report. Did The Economist's writer not see it? Too, there is discussion about next steps. Our gripe? The use of "AI," since AI encompasses much more than machine learning. Now, the post? It links to the data, the paper, and the code itself, which is on GitHub. That is how things will be, more or less, as research goes further.
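For readers who want the mechanics behind the Hinton item, here is a minimal sketch of back-propagation on a toy problem. Everything in it (the 2-2-1 layout, the XOR data, the learning rate) is our illustration, not anything taken from the 1986 Nature paper or the New Yorker piece.

```python
# Minimal back-propagation sketch: a 2-2-1 sigmoid network learning XOR,
# the classic task that a single-layer network cannot solve.
import numpy as np

rng = np.random.default_rng(0)

# Toy data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of the network (illustrative sizes and initialization).
W1 = rng.normal(size=(2, 2))
b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate, chosen for the toy problem
for step in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # network output

    # Backward pass: push the output error back through each layer.
    err = out - y                        # gradient of squared error (up to a constant)
    d_out = err * out * (1 - out)        # through the output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)   # through the hidden sigmoid

    # Gradient-descent updates.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

# Typically converges near [0, 1, 1, 0]; a different seed or more steps
# may be needed if a particular run stalls in a local minimum.
print(np.round(out.ravel(), 2))
```

The point of the sketch is the two-pass structure: a forward evaluation, then a backward sweep that applies the chain rule layer by layer.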
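And, to make the neural-net-versus-genetic comparison in the Economist item concrete, here is a hedged sketch, not the researcher's actual pipeline: we fit a small neural network and a genetic-programming symbolic regressor (the gplearn package) to the same engineered features and compare their test scores. The synthetic target below stands in for the wave statistics; it and every parameter choice are assumptions for illustration.

```python
# Hedged sketch: neural net vs. genetic-programming symbolic regression
# on the same pre-processed features. The synthetic target is a stand-in,
# NOT the actual rogue-wave data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from gplearn.genetic import SymbolicRegressor  # pip install gplearn

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 3))      # three engineered features (assumed)
y = X[:, 0] ** 2 + 0.5 * X[:, 1] * X[:, 2]  # stand-in "physical" relationship

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A small neural network.
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
nn.fit(X_tr, y_tr)

# A genetic-programming regressor that evolves closed-form expressions.
gp = SymbolicRegressor(population_size=1000, generations=20, random_state=0)
gp.fit(X_tr, y_tr)

# If the genetic approach is "about as capable," the two R^2 scores will be
# close -- and gp._program prints a formula a human can actually inspect.
print("neural net R^2:  ", round(nn.score(X_te, y_te), 3))
print("genetic prog R^2:", round(gp.score(X_te, y_te), 3))
print("evolved formula: ", gp._program)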
Related to: TGSoc.org/papers
What we see are six metrics. The world has gone mad numerically, in many ways; that, too, will be discussed. But, with respect to the flow of activity, the topic was motivated by OpenAI's little trick last November (the release of ChatGPT). They didn't do the world a whole lot of favors; rather, we will see, in less than two years, just how negative the impact might have been. Now, will subsequent activity on their part relieve some of this?
An adage is apropos: one cannot train out the crap that was trained into a system via machine learning.
At the "Papers" site, we put out an article in May and then followed in the latter months.