I had more discussions about large language models over lunch today, and overheard more at the tables we passed on our way out. (The pizza we had for lunch was great, by the way!) Now I am listening to the latest episode of EconTalk, which takes the more worried perspective on where things may be going: that we may soon have created something more intelligent than we are, and that we need to think about ways to control such things before it is too late.
It struck me that part of the alarmist line of reasoning seems to be this:
- We do not fully understand our own intelligence
- We also do not fully understand how the large language models produce their results
- Therefore large language models are intelligent
- …
- Apparently?
Kristoffer also mentioned this in the episode of Kodsnack we are releasing tomorrow - AI is currently something like religion for a certain kind of tech person.
Hey, why beat around the bush? AI is currently the religion for a certain kind of tech person.
No wonder investor types can get excited - what has more growth potential than something you do not understand? The potential is limitless, as long as everyone else believes the same thing.
L. Ron Hubbard was way ahead of the curve. Or ahead of this particular curve - he was certainly not the first.
Me? I still believe there are good uses, but also a whole lot of hype that will blow away pretty soon. Hopefully not too many people get fooled, laid off, mangled, scratched, or otherwise bruised in the process.
It definitely feels like time to stop shouting about AI from every rooftop and see whether we actually have something useful going on. Right now we are all just repeating questions and ideas back to each other. I will try to do my part, not least for my own sake.