Modern AI-generated images emerge from distinct generative paradigms — each leaving unique statistical fingerprints in pixel data, texture gradients, and frequency spectra that LunaNet is trained to identify.
01
Generative Adversarial Networks (GANs)
A generator and discriminator network compete in a minimax game. At convergence, the generator produces photorealistic outputs, but GAN upsampling operations (transposed convolutions and interpolation) leave periodic spectral artifacts in the FFT domain, particularly at mid-to-high frequencies. These artifacts are a primary detection signal for LunaNet.
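To make the signal concrete, here is a minimal NumPy sketch, not LunaNet's actual pipeline: it computes an azimuthally averaged power spectrum and a hypothetical `gan_peak_score` summarizing mid-to-high-frequency energy, where periodic upsampling artifacts tend to appear.

```python
import numpy as np

def radial_spectrum(image: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Azimuthally averaged power spectrum of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    h, w = image.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)
    bins = (r / r.max() * (n_bins - 1)).astype(int)
    sums = np.bincount(bins.ravel(), weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    return sums / np.maximum(counts, 1)  # mean power per radial bin, low -> high frequency

def gan_peak_score(image: np.ndarray) -> float:
    """Illustrative heuristic: fraction of spectral energy in the upper half
    of the radial frequency range, which GAN upsampling tends to inflate."""
    spec = radial_spectrum(image)
    return float(spec[len(spec) // 2:].sum() / spec.sum())
```

In practice a classifier would be trained on such spectra rather than thresholding a single ratio; the ratio here only illustrates where the artifact energy lives.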
02
Diffusion Models (Stable Diffusion, DALL·E)
Diffusion models learn to denoise images, step by step, from pure Gaussian noise toward a target distribution. Despite their high perceptual quality, they produce characteristic pixel-level texture inconsistencies, such as unnatural skin smoothness, overly regular hair, and specular anomalies, that are detectable through CNN spatial analysis.
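For concreteness, a minimal sketch of DDPM-style ancestral sampling; the noise-prediction interface `model(x, t)` and the beta schedule are assumptions for illustration, not any specific model's API.

```python
import torch

@torch.no_grad()
def ddpm_sample(model, betas: torch.Tensor, shape=(1, 3, 256, 256)) -> torch.Tensor:
    """Sketch of DDPM ancestral sampling: start from Gaussian noise and
    iteratively remove the noise the model predicts at each timestep."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)  # start from pure Gaussian noise
    for t in reversed(range(len(betas))):
        eps = model(x, torch.tensor([t]))  # assumed: model predicts the added noise
        # Posterior mean: subtract the scaled noise estimate, then rescale
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # a sample from the learned image distribution
```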
03
Variational Autoencoders (VAEs)
VAEs encode images into a compressed latent space and decode sampled latent vectors into new outputs. They tend to produce slightly blurrier images than GANs, with distinctive low-frequency spectral signatures. Many modern systems, including Stable Diffusion, run diffusion inside a VAE's latent space in a latent diffusion framework.
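A minimal PyTorch sketch of the encode-sample-decode path, including the reparameterization trick; `TinyVAE` and its layer sizes are illustrative, not a production architecture.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Toy VAE over flattened images: encoder -> Gaussian latent -> decoder."""
    def __init__(self, dim: int = 64 * 64 * 3, latent: int = 128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent)        # mean of q(z|x)
        self.logvar = nn.Linear(512, latent)    # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(),
                                 nn.Linear(512, dim), nn.Sigmoid())

    def forward(self, x: torch.Tensor):
        h = self.enc(x)                         # x: (batch, dim), flattened image
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps, keeping gradients
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar
```

The Gaussian sampling plus a pixel-wise reconstruction loss pulls outputs toward an average of nearby training images, which helps explain the characteristic blur and low-frequency signature.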
04
Face-Swap Deepfakes
Deepfake pipelines transplant a source identity onto a target frame using encoder-decoder networks. They leave blending artifacts at face boundaries, temporal inconsistencies in blinking and lip-sync, and abnormal colour distribution in skin regions, all of which are detectable through LunaNet's spatial CNN stage.
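One simple way to probe boundary blending, sketched below under the assumption that a face mask is available from an upstream detector; `boundary_blend_score` is a hypothetical heuristic, not LunaNet's method.

```python
import numpy as np
from scipy import ndimage

def boundary_blend_score(image: np.ndarray, face_mask: np.ndarray) -> float:
    """Compare gradient energy in the band around the face-mask boundary with
    the image-wide average; blended swaps often smear gradients there."""
    gray = image.mean(axis=2)                    # (H, W, 3) -> grayscale
    grad = np.hypot(*np.gradient(gray))          # gradient magnitude per pixel
    # Transition band: pixels near the mask edge (dilated minus eroded mask)
    band = ndimage.binary_dilation(face_mask, iterations=3) ^ \
           ndimage.binary_erosion(face_mask, iterations=3)
    return float(grad[band].mean() / (grad.mean() + 1e-8))
```

Scores well below 1.0 suggest an unnaturally smooth transition band; a real system would feed such regions to the CNN stage rather than rely on a single scalar.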