So apparently my Asterix won the last challenge and now I gotta make one of my own. Damn

Formatting is shamelessly stolen from @Anahkiasen@lemmy.blahaj.zone, thank you for your hard work :)

Theme

This week’s theme is “Sustainable Ecumenopolis”. I wanted something original, something I’d never seen before. Also, an ecological Trantor would be based AF.

Rules

  1. Follow the community’s rules above all else

  2. One comment and image per user

  3. Embed image directly in the post (no external link)

  4. Workflow/Prompt sharing encouraged (we’re all here for fun)

  5. At the end of the week each post will be scored according to the following grid

    Most upvoted: +3 points
    Second most upvoted: +1 point
    Theme is clear: +1 point
    OP’s favorite (me, this week): +1 point
    Most original: +1 point
    Last entry (to compensate for less time to vote): +1 point
    Prompt and workflow included: +1 point
  6. Posts that tie (ex aequo) will both get the points

  7. Winner gets to pick next theme! Good luck everyone and have fun!

Past entries

  1. Dieselpunk
  2. Goosebump Book
  3. Deep Space Wonders
  4. Fairy Tales
  5. A New Sport
  6. Monsters are Back to School
  7. War and Peace
  8. Distant lands
  9. Unreal Cartoons

Here’s my generation info:

A planet city with buildings entirely made of tree buildings,green high rise, very high density
Negative prompt: concrete,BadDream, bulidng blocks,forest
Steps: 20, Sampler: DPM++ 2M SDE Heun Karras, CFG scale: 7, Seed: 210053647, Size: 512x512, Model hash: 3c8530cb22, Model: cyberrealistic_v33, VAE hash: c6a580b13a, VAE: vae-ft-mse-840000-ema-pruned.ckpt, RNG: CPU, TI hashes: "BadDream: 758aac443515"

Good luck and have fun!

  • Itrytoblenderrender@lemmy.world · 1 year ago
    Video

    I had to scale the GIF down with an online tool, which put its logo in the bottom right corner.

    It’s OK if using tools outside of a classic AI generator violates the rules of the contest and disqualifies me. I still had a lot of fun trying it out!

    The idea was to “fly” from outer space deeper and deeper into an Ecumenopolis (had to google that one) until you “arrive” at the final destination deep inside the Ecumenopolis: a grass field.

    • AI Tool: Comfyui

    • Model: dreamshaper_5BakedVAE

    • Sampler: euler

    • Scheduler: normal

    • CFG: 7.0

    • Samples: 40

    • Positive Prompt: It’s complicated… See below

    • Negative Prompts: bad anatomy, bad proportions, blurry, cloned face, deformed, frame, border, black rectangle, disfigured, duplicate, extra arms, extra fingers, extra limbs, extra legs, fused fingers, gross proportions, long neck, malformed limbs, missing arms, missing legs, mutated hands, mutation, mutilated, morbid, out of frame, poorly drawn hands, poorly drawn face, too many fingers, ugly, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, out of frame, ugly, extra limbs, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck

    • Negative Embeddings: FastNegativeV2, BAdDream

    The Workflow is also embedded in this image:

    The workflow uses custom nodes which have to be installed for it to work.

    This is quite an interesting workflow:

    1. Base Image

    I’ve created a base image with the following prompt:

    documentary photography photo of closeup of a single grass blade, futuristic, grenn , ecological, hippie city background, natural lighting, from below/low angle, ARRI ALEXA 65, Kodak Vision3 IMAX

    2. Send the created image back into the workflow

    Via the Image Sender and Image Receiver nodes of the ComfyUI custom module “Impact Pack” you can send the generated image back into the workflow. With these nodes you can build a generation loop, which makes it possible to create something like an animation through creative use of inpainting.

    3. Scale the generated image down and pad it for outpainting

    The previously generated image is scaled down to 50%. Then we pad the image to get it back to its original size for the outpainting. Combined with an outpainting ControlNet, you get a new image which “zooms out” a bit:
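    The scale-and-pad step can be sketched outside of ComfyUI too. Below is a minimal Pillow approximation (my own code, not the actual Impact Pack nodes; `shrink_and_pad` is a hypothetical name): shrink the last frame to 50% and center it on a canvas of the original size, leaving a border for the outpainting model to fill.

```python
from PIL import Image

def shrink_and_pad(frame: Image.Image, factor: float = 0.5) -> Image.Image:
    """Shrink `frame`, then pad it back to its original size for outpainting."""
    w, h = frame.size
    small = frame.resize((int(w * factor), int(h * factor)))
    canvas = Image.new("RGB", (w, h))  # padded border starts out black
    canvas.paste(small, ((w - small.width) // 2, (h - small.height) // 2))
    return canvas
```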

    4. Now the fun starts

    ComfyUI has an optional setting that runs image generation in an endless loop: it generates image after image until you say stop. Since we’ve built a feedback loop into our workflow, we get the nice zoom effect for every following image.

    And this is where the “It’s complicated” prompt from above comes in.

    You can now slowly alter the prompt while the generation process is running. If you are a fast typist you can do this live, or you can pause, modify the prompt, and start the process again.

    My goal was to have a “Zoom in from Space” into the Ecumenopolis.

    Since the process zooms out from the first image, you have to think in “reverse” and modify the prompt gradually during the generation process, so that you go from the detail view of the grass to the outside view of the Ecumenopolis from space.

    Surprisingly, you only need a few iterations to get a nice effect:
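    Put together, the whole feedback loop boils down to a few lines. This is only a structural sketch (`generate_outpaint` and `shrink_and_pad` are hypothetical stand-ins for the ComfyUI sampler + outpainting ControlNet and the scale/pad step), with a list of prompts standing in for the live prompt editing:

```python
def zoom_out_sequence(first_frame, prompts, generate_outpaint, shrink_and_pad):
    """Each iteration zooms 'out' one step via the outpainting feedback loop."""
    frames = [first_frame]
    for prompt in prompts:  # e.g. grass blade -> green city -> planet from space
        padded = shrink_and_pad(frames[-1])               # 50% scale + border
        frames.append(generate_outpaint(padded, prompt))  # model fills the border
    return frames
```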

    5. Make a video

    The last step is to generate the animation with Zoom Video Composer. This will produce an .mp4 from your single images. It also has many parameters to play with to get different effects.
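    Zoom Video Composer handles the smooth zoom interpolation and the .mp4 output itself; as a minimal stand-in for this final step, Pillow alone can at least stitch the raw frames into an animated GIF. The frame order is reversed here, so the animation zooms in even though the workflow generated the frames zooming out. `frames_to_gif` is my own name, not part of any of the tools above.

```python
from PIL import Image

def frames_to_gif(paths, out_path, ms_per_frame=200):
    """Stitch still frames (given as file paths) into a looping animated GIF."""
    frames = [Image.open(p).convert("RGB") for p in reversed(paths)]
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=ms_per_frame, loop=0)
```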

    • imaqtpie@sh.itjust.works · 1 year ago
      Wow that’s really trippy. Making me dizzy lol 😵

      Thanks for sharing the detailed workflow. It’s honestly mind-boggling to me that it’s possible for one person to create this in their spare time. AI is such a powerful tool that can be utilized in so many different contexts, and we are still just scratching the surface of what is possible.

        • Thelsim@sh.itjust.worksM · 1 year ago
          I see it now. That’s a cool video and as far as I’m concerned it’s well within the rules of the challenge.
          The shift in orientation was a bit disorientating (vertical buildings become horizontal ones, that kind of thing) but it’s really interesting.
          And I find your workflow very impressive, puts my prompt guessing to shame :)