Wednesday, November 29, 2023

Science and AI

TL;DR -- Wherein we look at an old problem that has new twists due to technology and its new ways: media and news morph information severely. 

--

Now, I did not use AIn't in the title because I must refer to what might be called the real "AI," which would be a tool for all of us, including science. Now, "science" has a lot of meanings. We are being all-inclusive. Many think the "hard" types are the top dog; that drives the STEM focus. But there are the social sciences, which deal with people issues. There are the medical sciences. We could even discuss a "holy" science. The range covers the whole of humanity and the lives involved. 

More recently, computer science came along, and it is still being defined. AIn't, partly, might be attributable to issues there. But mathematics, itself, needs attention, as it swerved toward a mainly quantitative focus. Some of this goes back to computing's growth over the past century and its becoming useful enough, through time, to be everywhere now via the cloud. We will discuss, later, the concept of qualitative means required by mathematics. The "pure" aspects of the discipline might be invoked, but we are talking about other issues that technology will bring to bear. 

Recently, I discussed AI with a friend in the context of quantum mechanics (QM). I'll explain more as we go along, but the gist of the conversation was the difference between having an overview versus being in touch with the specifics. The former is a state that anyone not involved with a discipline can now attain with little effort, something that was not really possible in the past. Now, with the cloud (and Wikipedia), one can read on any subject. Now, of course, AIn't's rise makes things murky. What can you believe now? 

If I say "not much," that is a statement that was true in the past. But now? That "not much" would have to be changed to "very little" (can be believed), perhaps even to "nothing." Okay? Things are dire. We all have to be exceedingly careful and observant. 

Wait, Wikipedia itself? Well, every page there has a history. We must use that facility. All changes are tracked with respect to time, editor, and difference in content, all the way back to the beginning of the page. Other sites offer similar means for determining status and history. In general, going forward, we need markers and more (truth engineering will be the topic for this discussion). 
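To make "use that facility" concrete, here is a minimal sketch (Python, against the public MediaWiki API; the page title is only an example) that pulls a page's recent revision history -- timestamp, editor, and edit summary:

# Minimal sketch: fetch recent revision history for a Wikipedia page
# via the public MediaWiki API. The page title is just an example.
import json
import urllib.parse
import urllib.request

def revision_history(title, limit=5):
    params = urllib.parse.urlencode({
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp|user|comment",
        "rvlimit": limit,
        "format": "json",
    })
    url = "https://en.wikipedia.org/w/api.php?" + params
    req = urllib.request.Request(url, headers={"User-Agent": "history-check/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # The API keys pages by an internal page id; take whatever came back.
    for page in data["query"]["pages"].values():
        for rev in page.get("revisions", []):
            print(rev["timestamp"], rev["user"], "--", rev.get("comment", ""))

revision_history("Rogue wave")  # example title; any page works

Each entry is one tracked change; keep paging and you get back to the page's creation. 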

---

So, to the theme of the post. The friend showed me [a print of] an article that had appeared in the latest issue of The Economist. Here is another article that quoted The Economist: A New Way To Predict Ship-Killing Rogue Waves. Within this feed, there is a link to the article (requires payment). The friend had marked the print at the points where the author of The Economist article raved about AI and the way that this example solved problems beyond the imagination of humans (my paraphrase). Not as a retort, but in the spirit of debate, I marked [the article] where there were words about "mathematical routines" and the use of other techniques to check the results of the AI (neural network) approach. Another approach used was of the evolutionary programming type, which we have seen used in production. 
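For flavor, here is a toy sketch of that evolutionary style of search. This is a generic illustration only -- a made-up objective function, not the researchers' method. It evolves a small population of candidate parameter vectors toward lower error:

# Toy sketch of evolutionary-style search: rank, cull, mutate, repeat.
# Generic illustration only -- not the rogue-wave researchers' method.
import random

def error(candidate):
    # Stand-in objective: squared distance from a hidden target vector.
    target = [1.0, -2.0, 0.5]
    return sum((c - t) ** 2 for c, t in zip(candidate, target))

def evolve(pop_size=30, generations=200, mutation=0.1):
    # Start from random candidate parameter vectors.
    population = [[random.uniform(-5, 5) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness (lower error is better) and keep the top half.
        population.sort(key=error)
        survivors = population[:pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        children = [[gene + random.gauss(0, mutation)
                     for gene in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return min(population, key=error)

best = evolve()
print("best:", best, "error:", error(best))

The point is the loop structure -- rank, cull, mutate -- which stays the same whatever the objective being checked. 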

Of course, at the end of The Economist's article, there were the words "could" and "should," which are handwaving. The article did not go as far as some modern ones have done, where the piece exults over some accomplishment and its promise and then, at the end (probably forced by the editor), puts in words about this and that and the other thing (my words and emphasis) all being required as, essentially, the thing does not work as the glowing report might have suggested. 

---

In my usual manner, I went to look at the situation. 

Disclosure: The following recognizes the excellent work in this example. The intent is merely to demonstrate what is always a problem: transforming information into other states, faithfully. News and media face this all of the time; modern times seem to be allowing more laxity, with its consequences. 

An irony: Perhaps, AI (in a real sense which we have not seen yet) could help hone messages to be more truthful in the transforms. Let's table that, for now. 

The researcher gave a talk at the National Academy of Sciences about this legitimate research. And, as is becoming more imperative, he placed his data and code on GitHub. Also, thanks to the cloud (it has its good points), we can find records for him on Google Scholar, on GitHub (the repository where he put his experimental code and more), and elsewhere. 

But someone at The Economist reported on it. Or perhaps they only read some abstract. 

We, on the other hand, can look at links with supporting information. 

1. The data issues. One commenter touted that there are 300 years of data from an old science. As in all cases, the new approach is starting from the "state of the art" developed by humans and their methods. 


2. This is the paper that was quoted by The Economist and others. It can be found at arXiv. And, the paper only mentions AI cursorily. 


Abstracts are everywhere, as we find nowadays: NIH; Google Scholar; ... (a small fetch sketch follows this list). 



3. The code for the experiments that are reported in the paper, and the related data, are available at GitHub. This type of disclosure is becoming an imperative for several reasons, which we will discuss. Now, one bit of irony is that GitHub has its "Copilot" mode, which has been going on for a while, where people use xNN/LLM approaches to work on code. We will look at that process in a later post. 
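On item 3: since the actual repository is not named in this post, the following sketch uses a hypothetical owner/repo placeholder. It queries GitHub's public REST API for basic metadata -- the kind of quick status check (description, license, last push) that this sort of disclosure makes possible:

# Sketch: check basic metadata for a public GitHub repository via the
# REST API. "owner"/"repo" are hypothetical placeholders -- the paper's
# actual repository is not named here.
import json
import urllib.request

def repo_status(owner, repo):
    url = f"https://api.github.com/repos/{owner}/{repo}"
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    lic = data.get("license") or {}
    print("description:", data.get("description"))
    print("license:    ", lic.get("spdx_id"))
    print("last push:  ", data.get("pushed_at"))

repo_status("owner", "repo")  # hypothetical placeholder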
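And, on the abstracts note in item 2 above: a minimal sketch using NIH's public E-utilities (the search term is only an example) that finds PubMed entries and pulls their abstract text:

# Sketch, per item 2 above: look up abstracts via NIH's public
# E-utilities. The search term is just an example.
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def fetch_abstracts(term, limit=2):
    # Step 1: search PubMed for matching record ids.
    q = urllib.parse.urlencode({"db": "pubmed", "term": term,
                                "retmax": limit, "retmode": "json"})
    with urllib.request.urlopen(f"{BASE}/esearch.fcgi?{q}") as resp:
        ids = json.load(resp)["esearchresult"]["idlist"]
    if not ids:
        return ""
    # Step 2: fetch the abstracts as plain text.
    q = urllib.parse.urlencode({"db": "pubmed", "id": ",".join(ids),
                                "rettype": "abstract", "retmode": "text"})
    with urllib.request.urlopen(f"{BASE}/efetch.fcgi?{q}") as resp:
        return resp.read().decode("utf-8", errors="replace")

print(fetch_abstracts("rogue waves prediction"))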


----

Now, this is an example of science using computing and doing experiments related to analyzing data. It is only one example of lots of legitimate work being done. Those efforts need to be brought to attention and recognized. Lots of shuffling goes on, much under the guise of feeds. 

But, with AIn't and its activities coming into play, how do we tell legit from not? That is one of the themes that will be of importance in the future with regard to technology in general. One might say that this type of work is what the internet was created for. 

Now, using "collegial" for the former times and their ways: even then, there was need for "peer" review and other scrutiny. But the spirit of the times stressed truthful work and efforts at promoting proper communication. 

Background processes (there are many others beyond AIn't) have always been problematic. The lesson from mobile phones and their apps brought that to the fore. 

Remarks: Modified: 12/22/2023

11/30/2023 -- Minor corrections. 

12/22/2023 -- THE FUTURE OF AI IN SCIENCE AND MEDICINE, talk at Gairdner Foundation, Oct 25, 2023. 
