AI and science and art

Discussions about serious topics, for serious people
IvanV
Stummy Beige
Posts: 2847
Joined: Mon May 17, 2021 11:12 am

Re: AI and science and art

Post by IvanV » Wed May 22, 2024 9:11 am

dyqik wrote:
Tue May 21, 2024 6:37 pm
AI, of any variety, doesn't postulate new models based on conceptual understanding. It can only form patterns from its existing concepts.

This makes it useless for science.
monkey wrote:
Tue May 21, 2024 7:13 pm
They've tried getting AI to do physics at least once. I remember this story from a couple of years back - clicky.

They got a neural net to model dynamic systems like double pendulums of various types. The neural net came up with models that worked - it identified patterns, worked out how to describe them in maths, and made good predictions.

The trouble was, there was no explanation of what the variables mean, and in meat physics variables have meaning so's you know what's going on. So the researchers had no idea if what it was doing was useful or not. All they knew was that it was doing things differently to how a meat physicist would do it.

I think this was one of Dyqik's points.

(link to actual paper - clicky)
That's very interesting. As I read it, it came up with state variables that we couldn't understand. So it had located things that were not within its existing concept set, since its existing concept set was supplied to it by us. But maybe dyqik would interpret that differently.

In a sense, this is what we need: something that is not hidebound by our lived experience, and so can come up with descriptions we overlook due to the inevitable narrowing of view that comes with that experience.

Clearly there is a problem when we can't actually unravel what its state variables are. They are doubtless some curious composite that is hard to untangle. It isn't actually useful.
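
For concreteness, my understanding of the setup is roughly this: a network trained to predict the next observation through a small bottleneck, so the bottleneck coordinates end up playing the role of "state variables". This is only an illustrative sketch - the dimensions, names and training details below are made up, not the paper's actual method:

Code:
# Illustrative sketch only - not the code from the paper.
# An autoencoder with a small latent bottleneck, trained to predict the
# next observation of a dynamical system (e.g. double-pendulum frames).
# The learned latent coordinates play the role of "state variables",
# but nothing forces them to correspond to angles, momenta, etc.
import torch
import torch.nn as nn

OBS_DIM = 128      # flattened observation (hypothetical size)
LATENT_DIM = 4     # the bottleneck: candidate "state variables"

class LatentDynamics(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
        self.dynamics = nn.Sequential(   # steps the latent state forward in time
            nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, OBS_DIM))

    def forward(self, obs_t):
        z_t = self.encoder(obs_t)        # the opaque state variables
        z_next = self.dynamics(z_t)
        return self.decoder(z_next)      # predicted next observation

model = LatentDynamics()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(obs_t, obs_next):
    opt.zero_grad()
    loss = loss_fn(model(obs_t), obs_next)
    loss.backward()
    opt.step()
    return loss.item()

The predictions can be excellent while the latent coordinates remain an unlabelled composite of whatever the network found convenient - which is exactly the problem being described.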

I've had to examine an AI data-analysis forecasting model recently, and it was non-transparent in a similar way. From researching it, this is just what this class of AI data-analysis model does. It's doubtless great if you don't care how it makes its forecast. But we do care. It's no use if it combines the input variables in highly contingent ways that leave no transparency about how it came to its forecast. We actually need to know what effect certain parameters were having. We need to know that its methods are consistent with the known laws of physics, and so on. Whilst it is doubtless true to some extent that various things are contingent on everything else, in practical reality it was in many cases implausible that they were as highly contingent on everything else as the model appeared to indicate. You ought to be able to say, within quite narrow bounds, what effect these factors are having, everything else held constant.
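
The sort of check I wanted to be able to do is, roughly, sweep one input over its range while holding everything else constant and look at how the forecast responds. A minimal sketch of that idea on made-up data - the feature meanings and the choice of model are purely illustrative stand-ins for the opaque forecaster:

Code:
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))        # three made-up input factors
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

model = GradientBoostingRegressor().fit(X, y)   # stand-in for the opaque forecaster

# Sweep factor 0 over its observed range, holding the other inputs at their means
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
probe = np.tile(X.mean(axis=0), (len(grid), 1))
probe[:, 0] = grid
print(np.c_[grid, model.predict(probe)])   # approximate effect of factor 0 alone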

But whilst these models do suffer from a lack of transparency, I take from this some partial optimism that, with time and experience, we could devise and train them to produce more useful outputs, by helping them to recognise the difference between transparent descriptions and opaque ones.

dyqik
Princess POW
Posts: 7670
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole

Re: AI and science and art

Post by dyqik » Wed May 22, 2024 10:32 am

IvanV wrote:
Wed May 22, 2024 9:11 am
dyqik wrote:
Tue May 21, 2024 6:37 pm
AI, of any variety, doesn't postulate new models based on conceptual understanding. It can only form patterns from its existing concepts.

This makes it useless for science.
monkey wrote:
Tue May 21, 2024 7:13 pm
They've tried getting AI to do physics at least once. I remember this story from a couple of years back - clicky.

They got a neural net to model dynamic systems like double pendulums of various types. The neural net came up with models that worked - it identified patterns, worked out how to describe them in maths, and made good predictions.

The trouble was, there was no explanation of what the variables mean, and in meat physics variables have meaning so's you know what's going on. So the researchers had no idea if what it was doing was useful or not. All they knew was that it was doing things differently to how a meat physicist would do it.

I think this was one of Dyqik's points.

(link to actual paper - clicky)
That's very interesting. As I read it, it came up with state variables that we couldn't understand. So it had located things that were not within its existing concept set, since its existing concept set was supplied to it by us. But maybe dyqik would interpret that differently.
It came up with meaningless variables, meaning that it's just doing epicycles, rather than physics. This is not science.

IvanV
Stummy Beige
Posts: 2847
Joined: Mon May 17, 2021 11:12 am

Re: AI and science and art

Post by IvanV » Wed May 22, 2024 11:19 am

dyqik wrote:
Wed May 22, 2024 10:32 am
It came up with meaningless variables, meaning that it's just doing epicycles, rather than physics. This is not science.
To be useful in this space, I don't think AI necessarily needs to be able to "do physics", and I have no expectation of AI being able to "think". It is rather a calculation aid. What AI programs can do is search through large numbers of options and find the one that best meets the criteria that we set. This is what it does as it spanks us at games like chess. It's not thinking, it is just performing a mathematical optimisation.

As you say, it comes up with meaningless convoluted variables, and your characterisation of them as being just like epicycles is very appropriate. But we can set objectives, and it is the function of the program to look for "solutions" that better meet those objectives. If we can somehow indicate to it some kind of criteria for "meaningfulness" in a solution, it might be able to search through potential solution spaces to find solutions that are more meaningful to us, ones that we have overlooked. We somehow need to be able to deprecate epicycle-like solutions in its search criteria. Clearly there is something like convergence criteria in such things, and whether there is some kind of impediment there, I don't know.
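
What I mean by deprecating epicycle-like solutions is, in effect, putting a complexity penalty into the search objective. A toy sketch of the mechanism - the penalty weight and the family of candidate models are made up purely for illustration:

Code:
# Toy illustration of building "prefer simple descriptions" into the
# search objective: score each candidate model by fit error plus a
# penalty on its complexity, then keep the best total score.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x) + rng.normal(scale=0.05, size=x.size)   # the "data"

LAMBDA = 0.02   # how strongly we deprecate epicycle-like complexity

best = None
for degree in range(1, 10):                 # candidate models of growing complexity
    coeffs = np.polyfit(x, y, degree)
    fit_error = np.mean((np.polyval(coeffs, x) - y) ** 2)
    score = fit_error + LAMBDA * degree     # error + complexity penalty
    if best is None or score < best[0]:
        best = (score, degree)

print("chosen degree:", best[1])

With the penalty at zero the search always prefers the most elaborate model; with it switched on, extra epicycles have to earn their keep.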

There are these AI data-analysis systems out there, and having now come up against people using them, I can understand that they are often useful to people who just want a pretty good prediction and don't care very much how it works. For example, the people doing the stuff that chooses what website content to present to us. And for the games-playing engines, we have no expectation that we can deduce playing rules from them. But for those of us who need to understand what is influencing what, these non-transparent calculators are pretty useless. So I hope that the people who devise these things will recognise this shortcoming and try to find systems that are more useful in these situations; there would appear to be a desire for it. But maybe

I'm reminded of an earlier phase of the development of backgammon engines. Whilst the engines were already, in most circumstances, playing better backgammon than any living player, some people discovered that they could beat the computer by tempting it into weird positions that never occurred in human games. You couldn't do that with human opponents, because to even moderately competent human players these were such weird and bad positions that they strenuously avoided going anywhere near them. Lacking such human prejudices, and having no prior experience of those positions because they never occurred in its training set, the computer's self-determined position evaluation function misevaluated them as good positions, due to certain features that are advantageous in more normal play. As we can see, the computer couldn't "think" and see what was obvious to even a moderately competent backgammon player. But as soon as a few weird positions like that were included in its training set, it quickly adjusted its position evaluation to recognise them as highly undesirable, and it then avoided them as strenuously as humans did. And now computers spank us at backgammon, and we know no tactics for beating them, short of weighting the dice. It's not quite the same thing, but it indicates how we can improve the training so that the computer optimises towards what we actually want.
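
If I had to sketch the backgammon fix in code, it would be something like this: take positions the evaluator misjudges, label them correctly, add them to the training data and refit. Everything here - the position encoding, the labels, the model - is a made-up stand-in, not how any real backgammon engine is built:

Code:
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
positions = rng.normal(size=(1000, 24))      # encoded "normal" positions
values = rng.uniform(-1, 1, size=1000)       # stand-in evaluation targets

evaluator = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(positions, values)

weird_positions = rng.normal(loc=3.0, size=(50, 24))   # off-distribution positions
weird_values = np.full(50, -1.0)                        # labelled as very bad

# Augment the training set and refit, so the evaluator learns to avoid them
positions = np.vstack([positions, weird_positions])
values = np.concatenate([values, weird_values])
evaluator = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(positions, values)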

dyqik
Princess POW
Posts: 7670
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole

Re: AI and science and art

Post by dyqik » Wed May 22, 2024 11:31 am

There are lots of older statistical tools that do similar things - principal component analysis, for example. These are used to explore data, just as machine learning techniques are now.
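
For example, the exploratory use I mean is something like this (made-up data, just to illustrate):

Code:
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Made-up data: 200 samples of 10 correlated measurements
latent = rng.normal(size=(200, 2))
data = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))

pca = PCA(n_components=3).fit(data)
print(pca.explained_variance_ratio_)   # how much variance each component explains
scores = pca.transform(data)           # the data re-expressed in the new components
# The components are just linear combinations of the inputs; they don't come
# labelled with physical meaning either.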

But what they are used for is data exploration, not the scientific hypothesis and test process, which is what you suggested AI could do.

IvanV
Stummy Beige
Posts: 2847
Joined: Mon May 17, 2021 11:12 am

Re: AI and science and art

Post by IvanV » Wed May 22, 2024 12:32 pm

dyqik wrote:
Wed May 22, 2024 11:31 am
There are lots of older statistical tools that do similar things - principal component analysis, for example. These are used to explore data, just as machine learning techniques are now.

But what they are used for is data exploration, not the scientific hypothesis and test process, which is what you suggested AI could do.
I have read through my posts again, and I don't see where you got that thought from. I have suggested only that it could search for patterns in data.

Yes, there are older statistical tools that do that, but the world moves on and we look for better mathematical tools.

I started by suggesting that AI might look for patterns in data and find a mathematical description underlying such patterns. We have discovered it can do that, but currently in non-transparent ways that are not useful. Such non-transparency is unfortunately a common feature of how present AI programs explore data. I suggested we could perhaps prompt it to look for more transparent descriptions.

bjn
Stummy Beige
Posts: 2970
Joined: Wed Sep 25, 2019 4:58 pm
Location: London

Re: AI and science and art

Post by bjn » Wed May 22, 2024 9:02 pm

dyqik wrote:
Wed May 22, 2024 11:31 am
There are lots of older statistical tools that do similar things - principal component analysis, for example. These are used to explore data, just as machine learning techniques are now.

But what they are used for is data exploration, not the scientific hypothesis and test process, which is what you suggested AI could do.
PCA is a core machine learning technique. Used for all sorts.

/reply-guy

IvanV
Stummy Beige
Posts: 2847
Joined: Mon May 17, 2021 11:12 am

Re: AI and science and art

Post by IvanV » Thu May 30, 2024 11:24 am

A lecture to be given later this year in Oxford by Terence Tao (mathematician, Fields Medallist, at UCLA) on the uses of AI in science and maths was just advertised to me. So I googled what he might have been up to in that space. I found that he sits on the President's Council of Advisors on Science and Technology (PCAST).

PCAST recently published a report on the uses of AI to advance science. I haven't looked through the report itself yet, but the press release is saying similar things to what I said - look for patterns in large amounts of data, identify candidate solutions to pressing research problems, a tool for, rather than a replacement of, scientists. But we know what press releases are like, and maybe someone wants to evaluate the full report. At the same time, as I said above, I remain painfully aware that the AI data analysis I have seen is not very useful for these purposes. It is me wittering from a position of ignorance when I suggest that it seems possible models could be trained to recognise what counts as "useful" output, and head towards that. I can kind of imagine that is possible, but maybe an expert can tell me otherwise.
