How To Build a GIF/Video From Images With Python Pillow/OpenCV
In one of our projects, we needed to build a preview of our generated results, and we decided to use GIFs to show them at first. Here is how we did that.
from PIL import Image

images = []  # your images, already converted into PIL.Image objects

gif = []
for image in images:
    gif.append(image)
gif[0].save('temp_result.gif', save_all=True, optimize=False,
            append_images=gif[1:], loop=0)
Parameters of Image.save:
append_images # the frames to append after the first one; passing gif[1:] appends the rest of the list
optimize # whether to try to shrink the GIF palette by dropping unused colors
loop # number of times the GIF loops; 0 means loop forever, and if it is omitted the GIF plays only once
duration # how long each frame is displayed, in milliseconds
For a simple 24-frame GIF, this code takes less than 0.5 seconds. The main function here is Image.save(). When Pillow saves an image in GIF format, it converts the image from a basic mode (like RGB) into palette mode ("P"), then calculates the difference between consecutive frames to produce a smaller GIF. This is very quick because the calculation is done on P-mode images, which have just one channel instead of RGB's three.
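To see that single-channel palette mode for yourself, you can convert a frame by hand. A minimal sketch, using a made-up solid-color frame rather than real project output:

from PIL import Image

# hypothetical solid-color frame standing in for a real one
frame = Image.new("RGB", (320, 240), (120, 180, 240))

# roughly what Pillow does while saving a GIF: RGB -> palette ("P") mode
pal = frame.convert("P", palette=Image.WEB)

print(frame.mode, frame.getbands())  # RGB ('R', 'G', 'B') -> three channels
print(pal.mode, pal.getbands())      # P ('P',) -> a single palette-index channel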
Most of the time this works. You get a GIF that shows a list of images, and you can also use it to generate your own memes; it is very fast, and all you need to do is read the images and stick them together.
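For example, a minimal sketch of that read-and-stick workflow (the frames/ folder, the file pattern, and the 100 ms duration are placeholder choices, not from the project):

from glob import glob
from PIL import Image

# hypothetical frame files named frame_000.png, frame_001.png, ...
frames = [Image.open(path) for path in sorted(glob("frames/*.png"))]
frames[0].save("meme.gif", save_all=True, append_images=frames[1:],
               duration=100, loop=0)  # 100 ms per frame, loop forever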
However, problems can appear, especially when consecutive frames differ very little.
Problem 1: The background is full of noise, and the noise at the bottom right changes every frame (as described in https://github.com/python-pillow/Pillow/issues/4263).
Reason: when Pillow sticks frames together, it converts RGB into P, and this conversion requires a palette. If you call gif[0].save() with more than one image, Pillow defaults to palette=Image.WEB, which reproduces colors well but generates noise in each frame.
Pillow also provides another palette, Image.ADAPTIVE. It handles gradients poorly, but it does not generate any noise.
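To compare the two palettes on one of your own frames, here is a quick sketch; the input filename is a placeholder, and recent Pillow versions also expose these constants as Image.Palette.WEB and Image.Palette.ADAPTIVE:

from PIL import Image

frame = Image.open("frame_000.png").convert("RGB")  # hypothetical input frame

web = frame.convert("P", palette=Image.WEB)            # fixed web palette: good colors, noisy flat areas
adaptive = frame.convert("P", palette=Image.ADAPTIVE)  # palette fitted to this frame: no noise, weaker gradients

web.save("frame_web.gif")
adaptive.save("frame_adaptive.gif")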
There are two solutions for this situation:
from PIL import Image
from io import BytesIO

images = []  # list of PIL.Image objects

###
# Solution one: save each frame as its own single-frame GIF in memory,
# then combine those GIFs into the final one.
###
byteframes = []
for img in images:
    byte = BytesIO()
    img.save(byte, format="GIF")
    byteframes.append(byte)
gifs = [Image.open(byteframe) for byteframe in byteframes]
gifs[0].save("good.gif", save_all=True, optimize=False,
             append_images=gifs[1:], loop=0)

###
# Solution two: convert each RGB frame to P manually with Image.ADAPTIVE,
# so Pillow does not have to quantize it again while saving.
###
gif = []
for image in images:
    gif.append(image.convert("P", palette=Image.ADAPTIVE))
gif[0].save('temp_result.gif', save_all=True, optimize=False,
            append_images=gif[1:], loop=0)
Solution 1 saves each frame as its own single-frame GIF first; conceptually this is about as simple as renaming a .png file to .gif. Each frame is encoded on its own, with Pillow using Image.ADAPTIVE for the conversion, so when these GIFs are combined there is no need to convert to P again and no noise appears.
Solution 2 converts the frames to P with Image.ADAPTIVE directly, so it does the same thing as solution 1 but faster (it saves a lot of IO). Solution 1 is the workaround suggested in the GitHub issue.
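If you want to check the speed difference yourself, here is a rough, self-contained timing sketch; the 24 synthetic frames are stand-ins for real project output:

import time
from io import BytesIO
from PIL import Image

# synthetic 24-frame clip, just to have something to measure
images = [Image.new("RGB", (400, 300), (10 * i, 100, 200 - 5 * i)) for i in range(24)]

start = time.perf_counter()
byteframes = []
for img in images:
    buf = BytesIO()
    img.save(buf, format="GIF")
    byteframes.append(buf)
gifs = [Image.open(buf) for buf in byteframes]
gifs[0].save("via_bytesio.gif", save_all=True, append_images=gifs[1:], loop=0)
print("solution 1:", time.perf_counter() - start)

start = time.perf_counter()
converted = [img.convert("P", palette=Image.ADAPTIVE) for img in images]
converted[0].save("via_convert.gif", save_all=True, append_images=converted[1:], loop=0)
print("solution 2:", time.perf_counter() - start)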
There is also a Python library called imageio that can generate GIFs.
import imageio

gif = []
# read all of your images here
each_image = imageio.imread("sample.png")
gif.append(each_image)
imageio.mimsave("result.gif", gif, 'GIF')
After testing it and reading its source code, we think imageio works much like solution 2, which means it also has the problem described below.
Problem 2: For images that contain gradients, there is one more problem: color dither. In some regions the color should stay the same in every frame, but it keeps changing from frame to frame.
Below is one example of color dither (focus on the bottom right blue background):
In fact, among all the imaging libraries we tried so far, we still could not find a single solution that handles both situations: if you want no noise, dither appears, and if you want no dither, noise appears.
But we did manage to find a bad-but-working solution for this situation: use ImageMagick, combining the Python program with the shell.
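A minimal sketch of what that Python-plus-shell combination can look like, assuming ImageMagick is installed and the frames have already been written to disk; the paths are placeholders:

import subprocess
from glob import glob

# hypothetical frames already written as frames/frame_000.png, frame_001.png, ...
frames = sorted(glob("frames/frame_*.png"))

# -delay is in 1/100 s per frame, so 4 is roughly 25 fps; on ImageMagick 7 the binary is "magick"
subprocess.run(["convert", "-delay", "4", "-loop", "0", *frames, "result.gif"], check=True)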
As admitted above, this is a bad option: ImageMagick creates plenty of deployment problems. So it occurred to us that, since the root cause of the bad GIFs is that the GIF standard only allows 256 colors, we could simply use something with a richer standard: video.
In fact, using python-opencv (cv2) to convert images into a video is very simple.
import cv2

images = []  # list of cv2 (numpy array) frames

video = cv2.VideoWriter("test.avi", cv2.VideoWriter_fourcc(*'XVID'), 24, (1200, 800))
for image in images:
    video.write(image)
video.release()  # finalize the file
The main function is cv2.VideoWriter. It accepts four basic parameters:
the output file path
a fourcc code built with cv2.VideoWriter_fourcc
the frame rate (fps)
the video size (width, height), which must match the size of every frame
The important value here is the fourcc code; you can look up the available codes online. You can pass one as fourcc('a', 'b', 'c', 'd') or fourcc(*'abcd').
I would recommend two fourcc codes that are easy to use: ('I', '4', '2', '0') and (*'XVID'). The former basically just sticks the raw frames together: very high quality, but a huge file. The latter generates the video faster, and the file can end up even smaller than a single frame (at the cost of much worse quality).
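Here is a sketch that writes the same synthetic frames with both codes; the frame size, colors, and filenames are placeholders:

import cv2
import numpy as np

# synthetic 1200x800 BGR frames, just to have something to encode
frames = []
for i in range(24):
    frame = np.zeros((800, 1200, 3), dtype=np.uint8)  # height x width x channels
    frame[:] = (10 * i, 50, 200)                       # a flat color that changes per frame
    frames.append(frame)

size = (1200, 800)  # (width, height), must match the frames

raw = cv2.VideoWriter("raw.avi", cv2.VideoWriter_fourcc('I', '4', '2', '0'), 24, size)
xvid = cv2.VideoWriter("xvid.avi", cv2.VideoWriter_fourcc(*'XVID'), 24, size)

for frame in frames:
    raw.write(frame)
    xvid.write(frame)

raw.release()   # raw.avi: huge but essentially just the raw frames
xvid.release()  # xvid.avi: much smaller, lower quality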
Thanks for reading, and I hope to see you again in the next piece!