I think you can guess that part. I doubt a current LLM can create a valid PNG, even a 1x1px one that has been created before. That's partly because PNG chunks carry CRC-32 checksums: the LLM has definitely not seen enough base64-encoded PNGs to figure out the algorithm, and it isn't optimized to calculate checksums anyway. In fact, I analyzed the image, and the header checksum is wrong even though the header itself makes sense (it was likely lifted from training data). The model also gets penalized for repetition, which occurs a lot in image headers.
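For reference, here's a minimal sketch of that check in Python (the function name is mine; the chunk layout is from the PNG spec). Every chunk ends in a CRC-32 computed over the chunk type plus data, so a stolen-but-mangled header is easy to spot by recomputing each one. You'd feed it something like `base64.b64decode(...)` of whatever the model emitted:

```python
import struct
import zlib

def check_png_crcs(data: bytes) -> None:
    """Recompute and verify the CRC-32 of every chunk in a PNG byte string."""
    # A PNG starts with a fixed 8-byte signature.
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    pos = 8
    while pos < len(data):
        # Chunk layout: 4-byte big-endian length, 4-byte type,
        # <length> bytes of data, then a CRC-32 over type + data.
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        cdata = data[pos + 8:pos + 8 + length]
        (stored,) = struct.unpack(">I", data[pos + 8 + length:pos + 12 + length])
        computed = zlib.crc32(ctype + cdata) & 0xFFFFFFFF
        print(f"{ctype.decode('ascii')}: {'ok' if stored == computed else 'BAD CRC'}")
        pos += 12 + length
```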
AFAIK, the smallest valid image you see mentioned on the web is a 35-byte transparent-pixel GIF, and the smallest PNG is a 67-byte black pixel.
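To make the structure concrete, here's a sketch of how such a 1x1 black-pixel PNG can be assembled in Python. I believe the well-known 67-byte file uses 1-bit grayscale, which is what this reproduces; the exact byte count can vary slightly with how zlib packs the tiny IDAT stream:

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    # Each PNG chunk is: length + type + data + CRC-32 over (type + data).
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

# IHDR: 1x1 pixel, bit depth 1, color type 0 (grayscale),
# default compression/filter/interlace.
ihdr = struct.pack(">IIBBBBB", 1, 1, 1, 0, 0, 0, 0)
# One scanline: filter byte 0x00 + one data byte 0x00 (a black pixel).
idat = zlib.compress(b"\x00\x00", 9)
png = (b"\x89PNG\r\n\x1a\n"
       + chunk(b"IHDR", ihdr)
       + chunk(b"IDAT", idat)
       + chunk(b"IEND", b""))
print(len(png), "bytes")  # should land at or near 67
```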
To be fair, it did spit out a couple of completely nonsensical images afterwards.
I think the AI might be biased against dogs.
Did it give you the images in base64 from an LLM, or from an image generation model?
Testing rendering: [image], [image], another 67-byte PNG but 8px wide: [image], or 1 gray pixel: [image], or a green one: [image]
The article + the generator
Why did it take a whole minute to scroll past this on my Connect app?
No idea, it's only 1350 bytes now after the edit, with no crazy formatting.
I think it was one first and then the other