I have been having a lot of fun testing out the AI models accessible online. That is such a cool thing. The new AI can be considered emergent, as these models demonstrate far more capability than before, especially in conversation.
I prefer calling them conversational AI, though much of the discussion uses generative as the label.
Talking to them involves something called prompting, and I thought I would share some prompts I have used.
Bicycle analogy prompt:
Consider the assertion that learning to prompt a large language model is like learning to ride a bike. How might that explain the diverse reactions people have? Please relate this to Steve Jobs's statements about computers being bicycles for the mind.
Fast answers prompt:
Given a hard problem, it can take me days to work through it. But if I give it to a large language model, I get a response in seconds. That speed is by design, a consequence of the transformer architecture. How can I trust fast answers guaranteed by architectural design?
Functional definition prompt:
Consider the idea that you make money by providing a product or service while acting as a friendly stranger, and that a friendly stranger who receives it gives money in return. Then see if that works to explain buying a cup of coffee. I am testing out what I call a functional definition by getting a large language model's perspective.