[SDL] IMG_Load performance (was: Problem combining sw_scale with blit operations on surface)

michael brown n5qmg at earthlink.net
Thu Apr 9 11:25:24 PDT 2009


Donny Viszneki wrote:
> On Thu, Apr 9, 2009 at 10:12 AM, michael brown <n5qmg at earthlink.net>
> wrote:
>> A little about what I'm trying to do:
>> I have a v4l program that captures 640x480 frames at 30fps from a
>> security camera. When the program detects motion, it compresses the
>> frames into individual JPEG files. This all works great. I'm trying
>> to build a simple media player type application to review and/or
>> animate the images at up to 30 frames/second. This means I have to
>> accomplish everything for each frame in less than 33mS.
>
> This is a silly way to go about things. Don't wait for motion and then
> capture a JPEG, just capture to a good VBR video, and write out extra
> data about when motion occurs. You can get higher image quality at
> higher compression ratios than in your current scheme.

You say that, but I believe it's only because you don't fully know what 
I want to accomplish here.  On top of that, I'm tinkering in the manner 
that I do, which isn't always what others [experts] perceive as sensible. 
Regardless, it's how I learn, and I find that it works pretty well for me. 
:-)  The ultimate goal for me here is to learn as much as possible about 
how stuff works.  Eventually I'll move on to something else and this 
project/obsession will die of neglect.

I don't want anything captured most of the time, only during motion (as I 
define it at capture time).  The JPEG frames are less than 100K and are of 
good quality.  That's a compression ratio of about 10:1, since the raw 
capture frames are nearly 1MB, which I find quite acceptable.  libJPEG 
compresses most images in under 15 ms on my machine, leaving plenty of 
time for my motion analyzer to run on each frame in real time.  I will be 
writing extra information while I'm capturing, but only to identify a 
[very small] subset of the captured frames.
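
For reference, a minimal sketch of the timing check I have in mind 
(assuming SDL 1.2 plus SDL_image, and a placeholder file name rather 
than my real capture names) would be something like:

#include <stdio.h>
#include "SDL.h"
#include "SDL_image.h"

int main(int argc, char *argv[])
{
    /* Placeholder frame name; pass a real capture file on the command line. */
    const char *path = (argc > 1) ? argv[1] : "frame.jpg";

    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0, SDL_SWSURFACE);
    if (!screen) {
        SDL_Quit();
        return 1;
    }

    Uint32 t0 = SDL_GetTicks();
    SDL_Surface *raw = IMG_Load(path);              /* JPEG decode */
    Uint32 t1 = SDL_GetTicks();
    if (!raw) {
        fprintf(stderr, "IMG_Load: %s\n", IMG_GetError());
        SDL_Quit();
        return 1;
    }
    SDL_Surface *frame = SDL_DisplayFormat(raw);    /* convert to screen format */
    Uint32 t2 = SDL_GetTicks();
    SDL_FreeSurface(raw);
    if (!frame) {
        SDL_Quit();
        return 1;
    }

    SDL_BlitSurface(frame, NULL, screen, NULL);
    SDL_Flip(screen);
    Uint32 t3 = SDL_GetTicks();

    printf("decode %u ms, convert %u ms, blit+flip %u ms\n",
           t1 - t0, t2 - t1, t3 - t2);

    SDL_FreeSurface(frame);
    SDL_Quit();
    return 0;
}

That should show whether the JPEG decode, the surface conversion, or the 
blit/flip is the part eating into the 33 ms budget.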

This brings us to the main requirement: the smallest possible delay from 
capture to display on a remote X desktop when viewing the most recent 
data, while still allowing very responsive VCR-type control (pause, 
instantly jumping back and forth by varying amounts, etc.).  So far I've 
been able to keep things well below 500 ms.  I also want the ability to 
easily put together small video segments with ffmpeg that start/stop at 
any capture frame.  Each frame has text that was overlaid at capture 
time, and every frame has to be reconstructable later on with this 
information intact and readable.
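
As a rough sketch of the playback side (the frame naming, clip length, 
and window size below are just placeholder assumptions, not what my 
program actually uses), pacing to ~30 fps with SDL_GetTicks looks 
something like:

#include <stdio.h>
#include "SDL.h"
#include "SDL_image.h"

#define FRAME_MS   33    /* ~30 fps: one frame every 33 ms */
#define NUM_FRAMES 300   /* hypothetical clip length */

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0, SDL_SWSURFACE);
    if (!screen) {
        SDL_Quit();
        return 1;
    }

    for (int i = 0; i < NUM_FRAMES; i++) {
        Uint32 start = SDL_GetTicks();

        char name[64];
        snprintf(name, sizeof(name), "frame%06d.jpg", i);  /* hypothetical naming */

        SDL_Surface *raw = IMG_Load(name);
        if (!raw)
            break;                                   /* ran out of frames */
        SDL_Surface *frame = SDL_DisplayFormat(raw); /* match the screen's format */
        SDL_FreeSurface(raw);
        if (!frame)
            break;

        SDL_BlitSurface(frame, NULL, screen, NULL);
        SDL_FreeSurface(frame);
        SDL_Flip(screen);

        Uint32 spent = SDL_GetTicks() - start;
        if (spent < FRAME_MS)
            SDL_Delay(FRAME_MS - spent);             /* sleep off the rest of the slot */
    }

    SDL_Quit();
    return 0;
}

Since every frame is an independent JPEG, the VCR-style jumping around is 
then just a matter of changing which index gets loaded next.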

> Don't use SDL for this, use GStreamer!

Thanks, I hadn't found that yet.  That's the trouble I have with pursuing 
previously unexplored avenues on Linux these days: finding the most 
recent/advanced packages.  Too much dead wood lying around on the internet.



