Magnific AI Relight is Worse than Open Source

Published 2024-07-08
Try RunComfy and run this workflow on the cloud without any installation needed, with lightning-fast GPUs!
Visit www.runcomfy.com/?ref=AndreaBaioni and get 10% off GPU time or subscriptions with the coupon below.

REDEMPTION INSTRUCTIONS: Sign in to RunComfy → Click your profile at the top right → Select Redeem a coupon.

COUPON CODE: RCABP10 (Expires July 31)

Workflow (RunComfy): www.runcomfy.com/comfyui-workflows/comfyui-product…

Workflow (Local): openart.ai/workflows/nSqO2P2ZmDQGwohEbgl3

Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi

Relight better than Magnific AI and for free, locally or on the cloud via RunComfy!

(Install the missing nodes via the ComfyUI Manager, or use the links below:)

IC-Light ComfyUI nodes (GitHub): github.com/kijai/ComfyUI-IC-Light
IC-Light model (fc only, no need to use the fbc model): huggingface.co/lllyasviel/ic-light/tree/main

GroundingDinoSAMSegment: github.com/storyicon/comfyui_segment_anything
SAM models: available in the same segment_anything GitHub repo above.

Model: most SD 1.5 models work; I'm using epicRealism: civitai.com/models/25694/epicrealism?modelVersionI…

ControlNet auxiliary nodes: github.com/Fannovel16/comfyui_controlnet_aux

IPAdapter Plus: github.com/cubiq/ComfyUI_IPAdapter_plus

Timestamps:
00:00 - Intro
01:10 - Workflow (Local)
03:50 - Magnific vs Mine (First Test, global illumination)
04:31 - Magnific vs Mine (Second Test, custom light mask)
06:45 - Workflow (Cloud, RunComfy)
16:22 - Workflow Deep Dive (How it works)
19:15 - Outro

#magnificai #stablediffusion #comfyui #comfyuitutorial #relight #iclight

All Comments (21)
  • Dropping the mic. Love to see a simplified UI for these workflows. That is the biggest selling point of the paid platforms - the convenience. Great showcase as usual, Andrea.
  • I feel lucky to know your channel. It is really painful that credit for this amazing work goes to someone else who is even making money out of it. On top of that, they are not doing it as well as you did. And I think they will improve their product after watching your video without even crediting you or anyone involved in the open source community. Thank you for all the work you do.
  • @ted328
    This channel is a gift to creatives and artists everywhere. Can't thank you enough.
  • @johntnguyen1976
    Wonderful! Your channel keeps getting more and more useful by the day (and you were useful from day one).
  • @wholeness
    Nice! Now, is there a local Magnific/Krea upscaler flow you know of that can produce similar results? That would be a video we are all looking for!
  • @TheRoomcleaner
    Love the salt. Video was hilarious and informative 👍
  • Just a suggestion. I created a couple of product shots and saw that the edges are not always perfect. The mask is good, but I realized that it generates, for example, a bottle that is slightly bigger than the product, and when it is blended you can see the edges of the original generation lying under it. Would it be possible to add a "Lama Inpaint" node to remove the generated bottle and create a clean plate, and only after that paste or blend the product photo into the generation? Hope it makes sense. I'll try it myself, but I only began working with ComfyUI yesterday. LOL
  • @neoneil9377
    Thanks for this amazing video, this is the best AI content channel for professionals. Just one question: does IC-Light support SDXL yet? Thanks in advance.
  • @PaulVang-vf7fm
    Is this lllyasviel person a real person? They've made literally 90% of all Stable Diffusion extensions/apps. Dude has coding superpowers.
  • How do you make it generate a new background based on a prompt and make sure it's suitable for the foreground?
  • @denisquarte7177
    Nice, I came across your workflow last weekend and was about to experiment with some things, e.g. you didn't subtract the low frequencies from the image but instead added them inverted at 50% (see the frequency separation sketch after the comments). Still not sure why, though. But before I tinker needlessly, I'll take a look at what you have already cooked. Thanks a lot for sharing, and thanks as well to the people helping you develop this.
  • @mikelaing8001
    I tried Magnific and it was terrible; I wondered if I'd missed something, tbh.
  • @vincema4018
    One question: if I want to upscale the output image, should I insert the Ultimate SD Upscaler before or after the frequency separation and color matching nodes?
  • @MaghrabyANO
    Another question: in the Regenerator box, you get two results, right? One of them is AI-generated, and the other uses the object image masked over the AI-generated object. The masked/overlaid object is usually pixelated for me. I'm not sure why, but maybe it's because the input object resolution is 768x512 and the generated outcome is 1536x1024, so the image probably got stretched and pixelated. So how do I keep the same image size as the object, with no resizing needed? (See the resizing sketch after the comments.)
  • @yuvish00
    Hi Andrea, great workflow! I tested it with a bottle of perfume and a forest background, and the result was not so good. Meaning, the size of the perfume with respect to the forest background was not proportional. Any suggestions on how to improve this? Thanks!
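
Note on the frequency separation question above (the "added inverted at 50%" step): with 8-bit image values, blending the inverted low-frequency layer at 50% opacity is the same operation as the classic "subtract the low frequencies, halve, offset by 128" high-pass, apart from a constant half-level. Here is a minimal numpy sketch, not taken from the workflow's nodes, that checks the algebra:

# Minimal numpy sketch (not one of the workflow's nodes): the "add the
# inverted low-frequency layer at 50% opacity" blend matches the classic
# 8-bit high-pass "(image - low) / 2 + 128" up to a constant 0.5.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(4, 4)).astype(np.float64)
low = rng.integers(0, 256, size=(4, 4)).astype(np.float64)  # stand-in for a blurred copy

high_subtract = (image - low) / 2.0 + 128.0          # subtract, halve, offset by 128
high_blend = 0.5 * image + 0.5 * (255.0 - low)       # blend inverted low at 50% opacity

assert np.allclose(high_subtract - high_blend, 0.5)  # differ only by 255/2 vs 128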
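
On the resizing/pixelation question above: if the generation runs at twice the product photo's resolution (e.g. 768x512 in, 1536x1024 out), the paste/blend step has to upscale the original by 2x, and a low-quality stretch will look pixelated. A hypothetical Pillow sketch, not part of the workflow (file names and sizes are assumptions), of upscaling the product with a high-quality filter before it is masked over the generated image:

# Hypothetical sketch (not one of the workflow's nodes): resample the product
# photo to the generation resolution with LANCZOS before compositing, so the
# paste step does not introduce a blocky nearest-neighbor stretch.
from PIL import Image

product = Image.open("product_768x512.png")   # assumed input resolution
target_size = (1536, 1024)                    # assumed generation resolution

product_up = product.resize(target_size, Image.LANCZOS)
product_up.save("product_1536x1024.png")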