Official template

AI art generation tools

by @Admin
Creativity & Hobbies

How do I create AI-generated art with Midjourney, DALL-E, or Stable Diffusion?

Project plan

17 tasks
1. Set up Midjourney

Why: Midjourney is currently the industry leader for aesthetic quality and artistic flair.

How:

  • Create a Discord account and join the Midjourney server.
  • Subscribe to a plan (Basic or Standard) to gain access to the /imagine command.
  • Use the /settings command to make sure you are on the latest model (e.g., v6.1).

Done when: You have generated your first test image with the /imagine command.

2. Try DALL-E 3

Why: DALL-E 3 is the most user-friendly tool, excelling at following complex instructions and rendering text.

How:

  • Log into ChatGPT (Plus/Team/Enterprise) or use the free version via Microsoft Designer's Image Creator.
  • DALL-E 3 uses a natural-language interface, so you can describe scenes as if you were talking to a person.

Done when: You have generated an image that includes specific text or a complex scene description.

3. Install Stable Diffusion locally

Why: Stable Diffusion offers total control and privacy without subscription fees, provided you have a decent GPU.

How:

  • Download 'Stable Diffusion WebUI Forge' (an optimized fork of the classic Automatic1111 WebUI).
  • Make sure Python 3.10 and Git are installed on your system.
  • If you lack a powerful GPU, use a cloud-based interface such as 'Tensor.art' or 'Civitai' to run models for free or cheaply.

Done when: The web interface (usually at http://127.0.0.1:7860) is open and ready for input.

4. Learn the anatomy of a prompt

Why: A structured prompt yields predictable results across all AI models.

How:

  • Start with the Subject (e.g., 'A cybernetic owl').
  • Add the Medium (e.g., 'Oil painting', '3D render', 'Polaroid photo').
  • Define the Style/Artist (e.g., 'in the style of Van Gogh' or 'Cyberpunk aesthetic').
  • Add Lighting/Atmosphere (e.g., 'Golden hour', 'Volumetric fog').

Done when: You have written three prompts following this structure.
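As a sketch, the four-part structure above can be captured in a tiny helper so every prompt you write follows the same order (the function name and example values are illustrative, not part of any tool's API):

```python
def build_prompt(subject, medium, style, lighting):
    """Assemble a prompt in the order: Subject, Medium, Style, Lighting."""
    return ", ".join([subject, medium, style, lighting])

# Example built from the bullets above
prompt = build_prompt(
    "A cybernetic owl",
    "Oil painting",
    "in the style of Van Gogh",
    "Golden hour",
)
print(prompt)
# A cybernetic owl, Oil painting, in the style of Van Gogh, Golden hour
```

A fixed order matters less to modern models than having all four ingredients present, but it keeps your experiments comparable when you change only one slot at a time.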

5. Master lighting keywords

Why: Lighting is the most effective way to make an AI image look professional.

How:

  • Use 'Rembrandt lighting' for dramatic portraits.
  • Use 'Rim lighting' to separate a subject from the background.
  • Use 'Bioluminescent' for glowing, sci-fi effects.
  • Experiment with 'Global illumination' for realistic, 3D-like depth.

Done when: You have generated a set of four images showing the same subject under different lighting conditions.
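One way to run the four-image comparison systematically is to batch the prompts, keeping the subject fixed and varying only the lighting term (a plain-Python sketch; the subject line is just an example):

```python
subject = "portrait of an old lighthouse keeper"
lighting_styles = [
    "Rembrandt lighting",    # dramatic portrait shadows
    "rim lighting",          # separates subject from background
    "bioluminescent glow",   # glowing sci-fi effect
    "global illumination",   # realistic 3D-like depth
]

# One prompt per lighting condition, same subject throughout
prompts = [f"{subject}, {light}" for light in lighting_styles]
for p in prompts:
    print(p)
```

Paste each line into your generator of choice; because only the lighting term changes, any difference between the four results is attributable to the lighting keyword alone.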

6. Use photography language

Why: AI models understand photography terms better than vague descriptive words.

How:

  • Specify the lens: '85mm' for portraits, '14mm wide-angle' for landscapes.
  • Specify the angle: 'Low angle' for heroic shots, 'Bird's eye view' for maps and layouts.
  • Specify depth of field: use 'f/1.8' or 'Bokeh' to blur the background.

Done when: You have forced a specific camera perspective in your output.
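To reuse the same photographic vocabulary consistently, it can help to keep it in a small lookup keyed by intent (a hypothetical mapping assembled from the bullets above):

```python
# Intent -> photography terms (terms taken from the bullets above)
CAMERA_TERMS = {
    "portrait":  "85mm lens, f/1.8, bokeh",
    "landscape": "14mm wide-angle lens",
    "heroic":    "low angle shot",
    "overview":  "bird's eye view",
}

def with_camera(prompt, intent):
    """Append the photography terms for a given intent to a prompt."""
    return f"{prompt}, {CAMERA_TERMS[intent]}"

print(with_camera("a knight on a cliff", "heroic"))
# a knight on a cliff, low angle shot
```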

7. Use negative prompts

Why: Negative prompts tell the AI what not to include, which is crucial for cleaning up AI artifacts such as extra fingers.

How:

  • In Stable Diffusion, use the dedicated 'Negative Prompt' box.
  • In Midjourney, use the --no parameter (e.g., --no text, blurry, distorted).
  • Common negative terms: 'deformed, extra limbs, low resolution, watermark, signature'.

Done when: You have removed a specific unwanted element from a generation using a negative prompt.
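The two tools take negatives differently, which is easy to capture in a sketch: Stable Diffusion wants a comma-separated list in its own box, while Midjourney appends `--no` to the prompt itself (helper names are illustrative):

```python
# Common negative terms from the list above
NEGATIVE_TERMS = ["deformed", "extra limbs", "low resolution", "watermark", "signature"]

def sd_negative_prompt(extra_terms=()):
    """Comma-separated string for Stable Diffusion's 'Negative Prompt' box."""
    return ", ".join(list(extra_terms) + NEGATIVE_TERMS)

def midjourney_no(prompt, unwanted):
    """Midjourney has no separate box; unwanted terms go in a --no parameter."""
    return f"{prompt} --no {', '.join(unwanted)}"

print(sd_negative_prompt(["text"]))
# text, deformed, extra limbs, low resolution, watermark, signature
print(midjourney_no("a foggy harbor at dawn", ["text", "blurry"]))
# a foggy harbor at dawn --no text, blurry
```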

8. Learn Midjourney parameters

Why: Parameters let you control the technical output without changing your descriptive prompt.

How:

  • --ar 16:9: change the aspect ratio for cinematic shots.
  • --stylize 250: adjust how strongly Midjourney's default 'artistic' training is applied (0-1000).
  • --chaos 10: increase variety between the four initial grid images (0-100).
  • --v 6.1: make sure you are using the latest high-detail model.

Done when: You have generated a 16:9 image with a high stylize value.
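Since parameters always trail the descriptive text, a small builder keeps them tidy and reusable (a sketch; the function and its defaults are illustrative, the flags themselves are from the list above):

```python
def midjourney_prompt(text, ar="16:9", stylize=250, chaos=None, version="6.1"):
    """Append Midjourney parameters (--ar, --stylize, --chaos, --v) to a prompt."""
    parts = [text, f"--ar {ar}", f"--stylize {stylize}"]
    if chaos is not None:
        parts.append(f"--chaos {chaos}")
    parts.append(f"--v {version}")
    return " ".join(parts)

print(midjourney_prompt("a cybernetic owl, oil painting", stylize=600, chaos=10))
# a cybernetic owl, oil painting --ar 16:9 --stylize 600 --chaos 10 --v 6.1
```

Paste the resulting string after /imagine; because the flags are generated, you can sweep stylize or chaos values without retyping the prompt.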

9. Keep characters consistent

Why: Keeping a character the same across multiple images is the 'holy grail' of AI art.

How:

  • Generate a character you like and copy the image URL.
  • In your next prompt, add --cref [URL].
  • Use --cw 100 to copy the whole character (face + clothes) or --cw 0 to copy only the face.

Done when: You have two images of the same character in two different locations.

10. Download custom checkpoints

Why: The base Stable Diffusion model is generic; 'checkpoints' are fine-tuned models for specific styles (anime, realism, 3D).

How:

  • Visit Civitai.com and filter by 'Checkpoint'.
  • Download a popular model such as 'Juggernaut XL' (realism) or 'Pony Diffusion' (stylized).
  • Place the file in stable-diffusion-webui/models/Stable-diffusion and refresh your UI.

Done when: You have generated an image using a non-default, downloaded model.

11. Add styles with LoRAs

Why: LoRAs are small 'add-on' files that teach the AI specific characters, clothing, or art styles without changing the whole model.

How:

  • Download a LoRA from Civitai (e.g., 'Cyberpunk Edgewear' or a specific celebrity/character).
  • Place it in models/Lora.
  • Activate it in your prompt using the syntax <lora:filename:0.8> (where 0.8 is the strength).

Done when: You have applied a specific LoRA style to a generation.
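The activation tag follows a strict shape, so generating it avoids typos (a sketch; the filename below is hypothetical and must match the actual file in models/Lora):

```python
def lora_tag(filename, strength=0.8):
    """Build the <lora:filename:strength> activation tag used by the WebUI."""
    return f"<lora:{filename}:{strength}>"

# 'cyberpunk_edgewear' is a placeholder filename for illustration
prompt = "street portrait, neon rain " + lora_tag("cyberpunk_edgewear", 0.8)
print(prompt)
# street portrait, neon rain <lora:cyberpunk_edgewear:0.8>
```

Strengths around 0.6-1.0 are a common starting range; lowering the value blends the LoRA more subtly with the base checkpoint.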

12. Fix details with inpainting

Why: AI often fails at small details; inpainting lets you redraw only the 'broken' parts.

How:

  • In Stable Diffusion, go to the 'img2img' -> 'Inpaint' tab.
  • Use the brush to mask the face or hand you want to fix.
  • Set 'Denoising strength' to 0.4-0.6 and hit generate.
  • In Midjourney, use the 'Vary (Region)' button after upscaling an image.

Done when: You have fixed a distorted face or hand in an existing image.

13. Expand images with outpainting

Why: Outpainting lets you change the aspect ratio or add more background to a cramped image.

How:

  • Use Midjourney's 'Pan' buttons or 'Zoom Out' feature.
  • In Stable Diffusion, use the 'ControlNet' extension with the 'Inpaint' model or the 'Poor Man's Outpainting' script.
  • Describe the new surroundings in the prompt while keeping the original subject.

Done when: You have expanded a square image into landscape format without losing the original content.

14. Upscale for final output

Why: Native AI outputs are usually low resolution (approx. 1024x1024). Upscaling adds the detail needed for professional use.

How:

  • Use 'Topaz Photo AI' (paid) or the free, open-source 'Upscayl' desktop app.
  • In Stable Diffusion, use 'Extras' -> 'Upscaler' (choose R-ESRGAN 4x+).
  • In Midjourney, use the 'Upscale (Subtle)' or 'Upscale (Creative)' buttons.

Done when: You have a final image file with at least 4000 px on the longest side.

15. Plan a themed series

Why: Moving from random images to a coherent series builds a professional portfolio.

How:

  • Choose a theme (e.g., 'The Last Library on Mars' or 'Neo-Victorian Deep Sea Explorers').
  • Write a 'master prompt' that defines the consistent style, color palette, and lighting.

Done when: You have a written plan for five distinct scenes within one theme.

16. Produce the series

Why: This is the practical application of your prompting, inpainting, and consistency techniques.

How:

  • Generate the core images.
  • Use inpainting to fix any errors.
  • Use the same seed or --cref to ensure visual continuity.
  • Upscale all five images to high resolution.

Done when: You have five high-quality, consistent images ready for display.

17. Post-process and export

Why: AI art is rarely perfect out of the box; manual color grading and cropping add the final 10% of quality.

How:

  • Open your images in a free, open-source tool such as GIMP or Krita.
  • Adjust 'Levels' and 'Curves' to improve contrast.
  • Apply a subtle 'Grain' filter to hide AI smoothness and make the image look more like traditional media.

Done when: Your five images are exported as high-quality JPEGs or PNGs.
