Excellent point! I think in one of your conversations, maybe the one with Avi Bar-Zeev, the idea is expressed that LLM companies should pay human creators because if people stop making new things, AI output will get old, stale, and repetitive. Homogenized.
About two years ago I tried a test with Midjourney. I presumed it would be excellent at "Girl in a bikini sitting on a Ferrari," and it was. But I guessed that it would be poor at "Guatemalan refugee family detained at the US-Mexico border by United States Border Patrol agents." To my surprise, it generated a series of beautiful and thoughtful images. Even down to the grainy B&W Tri-X film I asked it to use.
Using AI to generate commercial images seems obvious. If it's faster and cheaper, it's likely inevitable. Using AI to generate photojournalistic or documentary images is ridiculous. Even if it is (surprisingly) good at it, it has no meaning (other than propaganda and disinformation). Nonetheless, I was impressed at how good Midjourney was at creating a "less obvious than a Ferrari" image.
What fascinated me most about your example images, even more than the AI being better at a Tomcat than at your dad's plane, is that its rendering of your dad's plane was "Top Gun Heroic" versus the original image you shared, which was more everyday. We can probably dial the "Hollywoodification" down, but still, AI may take us deeper into unreal fantasy.
Maybe – somehow – LLMs / ML models need to "go to university", rather than educating themselves on content scraped from the web.
Also I think they need to learn how to say "I don't know" rather than always confidently presenting a result even when the confidence score was low. I'm sure both these problems are being worked on!
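The "say I don't know" idea amounts to a confidence threshold on the model's output. A minimal sketch of that, with purely hypothetical labels, scores, and threshold (no real model here):

```python
# Minimal sketch: abstain when the model's top score is below a threshold.
# The labels, probabilities, and threshold are illustrative, not from any
# real model or library.

def answer_or_abstain(probs, threshold=0.75):
    """Return the top-scoring label, or "I don't know" if confidence is low."""
    label, score = max(probs.items(), key=lambda kv: kv[1])
    if score < threshold:
        return "I don't know"
    return label

# A confident prediction passes through...
print(answer_or_abstain({"cat": 0.92, "dog": 0.08}))  # cat
# ...while a near-coin-flip triggers abstention.
print(answer_or_abstain({"cat": 0.51, "dog": 0.49}))  # I don't know
```

Of course, the hard part in practice is that a model's raw scores are often poorly calibrated, so the threshold alone doesn't tell you when it's actually wrong.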
It's a great point that "I have low confidence about this" could be interesting and an improvement. On the other hand, people reply with something like "Well give me something for which you have high confidence", which gets us back to the same problem.
It’s absolutely got to be a very hard problem to solve. I’m not an expert – I have some experience working with TensorFlow so I get the basic principles – but, in the more abstract and general sense, actual human intelligence comes with an awareness of one's own limitations. Or – inversely – a lack of intelligence has a tendency to be coupled with the Dunning-Kruger effect. We surely want to avoid emulating that.
My vote is for an amended version of your previous innovation. This one would be called “The Luck Machine”.
😉
Evil