May 24, 2024
Experimenting with Generative AI
I've long generated images using 3D modeling and rendering. Some are really good, almost photorealistic, others are OK, and some suck. The quality depends on the tool I used and how much effort I put into it. A lot of the early ones made with Daz 3D are just terrible. Newer stuff is better. The better stuff usually came from 3DS Max, which I got quite used to, but it's insanely expensive and not available for Linux, where I do almost all my work, so for 3D modeling I've switched to Blender, which is free and works anywhere.
Of course, now we have a new method: generative AI. That didn't exist when I started this site, and it still has a way to go, but it produces some first-class realistic images. And some crap. Here are a few of the images I tried, just to see how it goes.
So, that leaves me three ways to generate images for my novels, for covers, publicity, and general edification:
- Licensed images
- 3D modeling and rendering
- Generative AI
And all three have issues.
Licensed images can be expensive and usually come with restrictions on how you can use them, like on one web site and in a single publication of up to 100 copies. After that, you need another license. My favorite source was CanStockPhoto, but they shut down. Alas! Even setting aside the cost, you can spend hours or even days going through gallery after gallery looking for just the right degree of visual excitement. Three hundred space stations so far, and not one of them is right.
Likewise, 3D modeling can eat up a lot of time. That's why some of my renderings are crap: I didn't have the hours to spend on them. And the learning curve is pretty steep. You have to learn all about editing meshes, modifiers, coordinate mapping, and materials, which include things like anisotropy, ambient occlusion, light sources.... You get the idea. Daz 3D is a notable exception in that you can put together a character without too much ado and do a quick render. If you want your character to actually be in a setting, that often means another piece of software. More hours to get what you want.
Generative AI can produce realistic images quite quickly, but tweaking it to get what you want can be tricky. You have to get the phrasing just right, and there is still no guarantee. The techie solution is to train your own neural network models. That's well within the capability of home users with the right hardware, but again you're looking at hours or days to accomplish it. On top of all that, getting the same spaceship twice is less likely than getting struck by lightning. You need 3D modeling for that.
There is a fourth option, but it's realistically closed to me. Except for drawing anime, my artistic skills are nowhere near good enough to paint a realistic scene, and even if they were, that's more hours or days.
There you have it. A choice of methods, all of which require a significant investment of your valuable time. Here's hoping that AI improves enough to change the equation.
At this stage in rewriting my web site, comments are not functional yet. That's too bad, because I'd really like to hear from others on this.