We get a rather substantial winter break at SUNY Brockport, and I often use this time to do something “different”. This year, I wanted to explore image generation using AI (since everything AI is all the rage right now). I also recently upgraded my computer and purchased a high(er)-end graphics card. I got to thinking that running AI generation software locally might be a good test of my system, but I needed a project. Then, I found this in my inbox:
I tend not to trust NY Times emails (if you allow the blocked content to load, the ads in the email load too; if I go directly to the website instead, my adblockers do their job). What is interesting is that the alt text for the image reads like it was written by an AI. I wonder why a company whose business is writing words needs an AI to caption an image they created (assuming the image is, in fact, theirs). It got me thinking: if they used AI to generate this caption, I would like to see what AI does with those words as a prompt for generating images.
I would not say that getting my computer set up for image generation was trivial; however, the documentation at Hugging Face has been extremely helpful, with clear and concise examples. Knowing a bit of Python (and knowing that ChatGPT can generate the code I don’t) has made creating a simple, locally driven image-generation app fairly straightforward.
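For the curious, here is a minimal sketch of what such an app boils down to, assuming the Hugging Face diffusers library and a CUDA-capable GPU; the checkpoint name, prompt, and file name are illustrative placeholders, not necessarily what I used:

```python
# Minimal text-to-image sketch using Hugging Face diffusers.
# Assumes a CUDA GPU; the checkpoint and prompt below are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a commonly used Stable Diffusion checkpoint
    torch_dtype=torch.float16,         # half precision to fit comfortably in GPU memory
)
pipe = pipe.to("cuda")

prompt = "the alt text from the email goes here"
image = pipe(prompt).images[0]  # the pipeline returns a list of PIL images
image.save("caption-test.png")
```

The first call to `from_pretrained` downloads the model weights and caches them locally, so subsequent runs work offline.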
Anyway, on to the results. I took the caption above and used that to prompt some image generation models. There are a bunch out there, including Stable Diffusion, Openjourney, and OpenDalle. I’ve got a version of each stored locally, and here’s what they came up with.
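Mechanically, the comparison is just a loop over the local checkpoints. A rough sketch, again assuming diffusers and using hypothetical local paths:

```python
# Run one prompt through several locally stored models, saving one image each.
# The paths below are hypothetical; AutoPipelineForText2Image picks the
# appropriate pipeline class for whatever checkpoint it is handed.
import torch
from diffusers import AutoPipelineForText2Image

prompt = "the caption from the NY Times email"
local_models = {
    "stable-diffusion": "./models/stable-diffusion",
    "openjourney": "./models/openjourney",
    "open-dalle": "./models/open-dalle",
}

for name, path in local_models.items():
    pipe = AutoPipelineForText2Image.from_pretrained(path, torch_dtype=torch.float16)
    pipe = pipe.to("cuda")
    pipe(prompt).images[0].save(f"{name}.png")
    del pipe                  # release the model before loading the next one
    torch.cuda.empty_cache()  # free GPU memory between models
```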