Krita with Free AI Diffusion: When Photoshop Doesn’t Pay Off

Krita: Open source graphics editor with AI support

The rise of AI models for content generation has been meteoric, and practical scenarios for using AI in everyday work are emerging alongside the showcase applications. Highlights include online services and Photoshop’s AI tools for advanced photo editing. However, not everyone wants to pay a subscription fee. We’ll explore what the open-source graphics editor Krita lets users achieve with free local AI models and how to run them on a gaming graphics card.

This article was made possible with financial support from NVIDIA. They did not alter the content of this article; it is based on our findings and reflects the author’s views.

Generative models such as GPT, DALL-E or Stable Diffusion are demanding in terms of computing power. One useful tool for working with AI models more efficiently is NVIDIA’s TensorRT framework, designed to accelerate and optimize deep neural network inference.

TensorRT helps generative models run faster and with lower hardware demands. It achieves this, for example, by fusing model layers, optimizing the computational graph, or using mixed precision (e.g. FP16 or INT8). This reduces latency and memory usage, which is an advantage when deploying models on servers, but also at home on conventional graphics cards.
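For illustration, a minimal Python sketch of such an optimization might look like the following. It assumes a model has already been exported to ONNX (the file name model.onnx is a placeholder) and uses TensorRT’s Python API to build an FP16-enabled engine; real Stable Diffusion pipelines are considerably more involved.

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )

    # Parse a model previously exported to ONNX (placeholder file name).
    parser = trt.OnnxParser(network, logger)
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    # Allow mixed precision: TensorRT may run layers in FP16 where beneficial.
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)

    # Layer fusion and other graph optimizations happen during this build step.
    engine = builder.build_serialized_network(network, config)
    with open("model.plan", "wb") as f:
        f.write(engine)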

In practice, TensorRT is used, for example, to process text or image generative models that can be deployed in the cloud as well as on local or portable devices. Its support contributes to the efficient operation of applications without significantly increasing hardware costs.

TensorRT is compatible with frameworks such as PyTorch and TensorFlow, which makes it easy to deploy optimized models. Developers and enthusiasts can incorporate them into their projects and bring advanced generative AI to the general public, even on mainstream graphics cards in a home environment.
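For PyTorch users, the Torch-TensorRT integration reduces this to a single compile call. The sketch below is only illustrative: it assumes a CUDA-capable GPU and uses a tiny stand-in module in place of a real diffusion model.

    import torch
    import torch_tensorrt

    # Stand-in module; in practice this would be e.g. a diffusion UNet.
    model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, kernel_size=3)).eval().cuda()

    # Compile with TensorRT, allowing FP16 kernels for the supported layers.
    trt_model = torch_tensorrt.compile(
        model,
        inputs=[torch_tensorrt.Input((1, 3, 512, 512), dtype=torch.half)],
        enabled_precisions={torch.half},
    )

    # The optimized module is then used like any other PyTorch model.
    out = trt_model(torch.randn(1, 3, 512, 512, dtype=torch.half, device="cuda"))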

We’ll use the popular open-source graphics editor Krita and its AI Diffusion add-on, which connects the program to the ComfyUI platform, a graphical interface for creating content with AI models.

Krita: a free, open-source alternative for advanced image editing

Krita is a free, open-source, cross-platform application. It is popular mainly among digital artists, but it also offers a variety of tools useful for photo and image editing. The combination of advanced features and a friendly user interface makes it an interesting choice for those who are looking for efficient software and don’t want to invest in paid programs.

Krita was created in 1998 as part of the KOffice package, originally intended to compete with professional image editing applications such as Photoshop or GIMP. Over time, however, the developers focused their efforts on the needs of digital artists, as this segment had been largely neglected in the world of open-source software.

Its interface and features are optimized specifically for illustration, concept art and other artistic techniques. Although its stock photo-editing tools are less advanced than those of the as-yet-unsurpassed Photoshop or the simpler Affinity Photo, it can be used for photo editing as well.

Krita allows the workspace to be adapted to the user’s needs. You can set the layout of the panels and tools to suit your workflow when working on photographs. It can also handle high-resolution bitmap images, which is ideal for photographers who are into detailed editing. Plus, it includes tools that lend themselves to more creative uses, such as repainting photos with brushes or creating graphical elements directly from photos.

The application also supports layers with transparency masks, so you can work on individual parts of an image separately, as well as adjustment layers and external objects. This makes non-destructive editing possible: you can easily revert to the original version of a photo and tweak colors, brightness, curves, levels and other properties without affecting the rest of the image, thanks to intuitive sliders and advanced color management. Support for different color spaces (including HDR) will be especially appreciated by those working with professional formats.

It also has extensive filter support – from sharpening and blurring to stylized effects that can add an artistic touch to your photos.

With support for scripting and plug-ins, Krita’s capabilities can be greatly expanded, and it can do things Affinity Photo can’t. A particular strength is its support for Python scripting, which lets developers and users create custom tools and automate various tasks directly in the Krita environment. This is exactly what Krita AI Diffusion, the AI image-editing add-on I want to focus on today, takes advantage of.
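For a taste of what that scripting looks like, here is a minimal sketch that can be run from Krita’s built-in Scripter console (Tools → Scripts → Scripter); it simply adds a new paint layer to the currently open document.

    from krita import Krita

    app = Krita.instance()
    doc = app.activeDocument()

    if doc is not None:
        # Create a new paint layer and add it to the document's layer stack.
        layer = doc.createNode("My scripted layer", "paintlayer")
        doc.rootNode().addChildNode(layer, None)
        doc.refreshProjection()  # redraw the canvas
        print("Added a layer to", doc.name())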

The add-on uses artificial intelligence models such as Stable Diffusion, which have recently become popular for generating and editing images.

The dedicated tools used to run AI models today mostly take the form of web applications served from a local server. However, working in their interface and moving edited images back and forth between the web application and a graphics editor is uncomfortable, to say the least.

Krita AI Diffusion, on the other hand, is integrated directly into the Krita user interface, where it acts as another set of tools. With the help of AI, you can generate and edit content right inside the document you are working on. One can sense that the intention of the add-on’s author is to get closer to the AI-powered features offered by Photoshop.

To run the AI locally, a graphics card with at least 6 GB of video memory is recommended. The plugin supports NVIDIA graphics cards via CUDA, AMD GPUs via DirectML on Windows and ROCm under Linux, and uses MPS (Metal) on Apple M1/M2 Macs. But you can also use cloud services.
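As a rough illustration (not the plugin’s own code), a PyTorch script can check which of those local backends it sees roughly like this:

    import torch

    if torch.cuda.is_available():
        device = torch.device("cuda")   # NVIDIA via CUDA (ROCm builds report here too)
    elif torch.backends.mps.is_available():
        device = torch.device("mps")    # Apple Silicon via Metal Performance Shaders
    else:
        device = torch.device("cpu")    # fallback; DirectML on Windows needs its own package
    print("Running on:", device)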

Krita AI Diffusion features

The AI Diffusion add-on acts as an intermediary between Krita and ComfyUI, an open-source generative AI user interface that uses a node-based system to create images, videos and audio. It lets users design and execute advanced Stable Diffusion workflows through a graphical interface without any programming. It supports various models such as SD1.x, SD2.x and SDXL, and integrates tools such as ControlNet and T2I-Adapter. The application is available for Windows, macOS and Linux.
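Although Krita AI Diffusion hides it completely, ComfyUI also exposes a small HTTP API on the local server it runs (port 8188 by default). As an illustrative sketch, a workflow exported from the ComfyUI interface in its API format (the file name workflow_api.json is a placeholder) could be queued like this:

    import json
    import urllib.request

    # Load a workflow previously exported from ComfyUI in API format.
    with open("workflow_api.json", "r", encoding="utf-8") as f:
        workflow = json.load(f)

    # POST it to the local ComfyUI server; /prompt puts the job in the queue.
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))  # the server replies with a prompt_id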

AI Diffusion itself offers several tools for working with images:

  • Generate – Create new images from scratch based on a text description or an existing image. Default support for SD1.5 and SDXL models.
  • Upscale – Allows upscaling of image resolutions up to 4K or 8K and beyond without memory overload.
  • Inpaint – Allows you to select an area of an image and delete or replace its contents. The generation can be controlled by simple text instructions.
  • Outpaint – Expands the canvas by automatically filling in the blank area so that it seamlessly connects to the existing image.
  • Refine – The ability to fine-tune the content of an existing image using the effect strength slider. Also great for adding new elements to an image using a rough sketch.
  • Live Painting – AI interprets your canvas in real time and provides instant feedback.

In addition, AI Diffusion lets you guide image creation using sketches, line art or maps (depth, normal). You can transfer the poses of figures from reference images or control composition using segmentation maps.

The add-on works with any image resolution and can automatically adjust it to the requirements of the AI model.

You can queue tasks while working on your project, and it even allows you to cancel image generation. Previously generated images and their prompts can be easily browsed in the history.

The default style presets cover the basic scenarios for easy control, but you can also create custom presets, select Stable Diffusion checkpoints, add LoRA models, adjust samplers and more.


