Below walks through the command used to overlay an image on a video:

$ ffmpeg -i INPUT.mp4 -i IMAGE_OVERLAY.png \
    -filter_complex "overlay=XXXX:YYYY" \
    -pix_fmt yuv420p -c:v libx264 -c:a copy \

- INPUT.mp4: the video file (note, this post only considers stitched mp4 files from the GoPro 360 cameras – Fusion and MAX).
- IMAGE_OVERLAY.png: the nadir (for equirectangular videos) or watermark (for normal videos).
- -filter_complex "overlay=XXXX:YYYY": here we create 2 video tracks for the output, where the first input is the original video and the second is the image to be overlaid as another video track (IMAGE_OVERLAY.png). XXXX is the horizontal pixel offset and YYYY is the vertical pixel offset of the overlay, measured from the top-left corner (0:0).
- -pix_fmt yuv420p: when outputting H.264, adding -vf format=yuv420p or -pix_fmt yuv420p will ensure compatibility so crappy players can decode the video.
- -c:v libx264: encodes the video using the libx264 codec (H.264); -c:v is an abbreviated version of -codec:v.
- -c:a copy: re-uses the audio from the source file. You can't do that with the video (in this case); it has to be transcoded because we are creating a new video source.
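The flags above can also be assembled programmatically. Here is a minimal Python sketch: the helper name, the OUTPUT.mp4 filename, and the 5376x2688 frame / 500x500 nadir dimensions are illustrative assumptions, not values from this post; only the ffmpeg flags themselves come from the command above.

```python
# Sketch: build the ffmpeg overlay command as an argument list.
# Helper name, OUTPUT.mp4, and the example dimensions are assumptions.

def build_overlay_cmd(video, image, x, y, output="OUTPUT.mp4"):
    """Return an ffmpeg argument list overlaying `image` on `video`
    at pixel position (x, y), measured from the top-left corner (0:0)."""
    return [
        "ffmpeg",
        "-i", video,                    # input 0: the source video
        "-i", image,                    # input 1: the image to overlay
        "-filter_complex", f"overlay={x}:{y}",
        "-pix_fmt", "yuv420p",          # broad player compatibility
        "-c:v", "libx264",              # the video must be re-encoded
        "-c:a", "copy",                 # the audio can be passed through
        output,
    ]

# Example: centre a 500x500 nadir at the bottom of a 5376x2688 frame.
x = (5376 - 500) // 2   # horizontal offset from the left edge
y = 2688 - 500          # vertical offset from the top edge
cmd = build_overlay_cmd("INPUT.mp4", "IMAGE_OVERLAY.png", x, y)
print(" ".join(cmd))
```

Passing the list to subprocess.run(cmd) would execute it without any shell-quoting concerns.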