[SDL] Problem with ALSA only...

Eddahbi Karim installation_fault_association at yahoo.fr
Sat Nov 29 01:46:00 PST 2003


David Olofson wrote:

>
>Anyway, a 2048x2048 RGBA8 texture is 16 MB, which is rather big to use 
>as allocation granularity for many of the cards that support 
>2048x2048 textures. (Even some 16 MB cards support that large 
>textures, though it's obviously not physically possible to keep one 
>in VRAM together with the frame buffer. It's just supported because 
>textures can have less than 32 bpp, and/or because there are versions 
>of the card with more VRAM.)
>
>Maybe it would make sense to have some kind of internal limit here, 
>maybe related to the display resolution or something... Tiles larger 
>than the screen don't make much sense, even for huge surfaces.
>
Sure. Hey, some backends don't even support surfaces larger than the 
screen :)
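
For the record, the 16 MB figure above checks out; a trivial sanity
check in C:

    /* One 2048x2048 RGBA8 texture: 4 bytes per texel. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long bytes = 2048UL * 2048UL * 4UL;
        printf("%lu bytes = %lu MB\n", bytes, bytes / (1024 * 1024));
        return 0;    /* prints: 16777216 bytes = 16 MB */
    }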

> If 
>they do anything, it would be preventing OpenGL from swapping parts 
>of a huge surface (of which only a part at a time is used) out of 
>VRAM, to leave room for other data that is actually used every frame.
>
Swapping textures from/to video memory has a higher cost than binding 
new textures. I remember one of my programs running out of video 
memory; the 3D performance really suffered. Increasing the 
granularity only makes this problem worse (to the point that the 
video RAM might get refilled many times per frame).

>
>Then again, texture binding has a significant cost on some cards, 
>
That would have to be benchmarked ;) I never found the cost of texture 
binding to be that high, especially compared to the cost of swapping 
textures out of video memory. Sure, texture binding time is 
driver-dependent, but uploading a texture to video RAM will always kill 
your performance.
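
Here's roughly how such a benchmark could look (an untested sketch
using SDL 1.2 + OpenGL; a realistic version would also draw a quad
after each bind, since drivers defer most of the work until the
texture is actually used):

    #include <stdio.h>
    #include "SDL.h"
    #include "SDL_opengl.h"

    #define SIZE  256
    #define LOOPS 1000

    static GLubyte pixels[SIZE * SIZE * 4];

    int main(int argc, char *argv[])
    {
        GLuint tex[2];
        Uint32 t0, t_bind, t_upload;
        int i;

        SDL_Init(SDL_INIT_VIDEO);
        SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);

        /* Create two resident textures to alternate between. */
        glGenTextures(2, tex);
        for (i = 0; i < 2; ++i) {
            glBindTexture(GL_TEXTURE_2D, tex[i]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, SIZE, SIZE, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        }

        /* Cost of rebinding already-uploaded textures. */
        t0 = SDL_GetTicks();
        for (i = 0; i < LOOPS; ++i)
            glBindTexture(GL_TEXTURE_2D, tex[i & 1]);
        glFinish();
        t_bind = SDL_GetTicks() - t0;

        /* Cost of re-uploading the texel data every pass. */
        t0 = SDL_GetTicks();
        for (i = 0; i < LOOPS; ++i)
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, SIZE, SIZE,
                            GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glFinish();
        t_upload = SDL_GetTicks() - t0;

        printf("bind: %u ms, upload: %u ms\n",
               (unsigned)t_bind, (unsigned)t_upload);
        SDL_Quit();
        return 0;
    }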

>
>which makes this a balance act. Limit max texture size to twice the 
>size of the screen? Limit it so one texture uses less than 30% of the 
>available VRAM? Other ideas?
>
You could use the VRAM size, but... there is no portable way that I 
know of to query the amount of video RAM in OpenGL :-/
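
What you *can* query portably is GL_MAX_TEXTURE_SIZE, so one heuristic
would be to clamp the tile size to the smaller of that limit and the
largest screen dimension. A minimal sketch (the function name is made
up):

    #include "SDL_opengl.h"

    /* Pick a tile size no larger than the screen or the driver's
     * texture size limit, rounded down to a power of two (plain GL
     * textures must be 2^n x 2^n). */
    static int pick_tile_size(int screen_w, int screen_h)
    {
        GLint max_tex;
        int limit;

        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_tex);
        limit = screen_w > screen_h ? screen_w : screen_h;
        if (limit > max_tex)
            limit = (int)max_tex;

        /* Clear low bits until only the highest one remains. */
        while (limit & (limit - 1))
            limit &= limit - 1;
        return limit;
    }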

Anyway, before resorting to heuristics, you need some real-world 
measurements, like the statistical distribution of surface sizes. I 
once tried to find an "average" surface size by running different 
programs and printing statistics, only to find that each program is 
really different: some allocate only small surfaces, others allocate 
random sizes, others keep a copy of the background... (For the record, 
the largest surfaces I could find were the size of the screen, and the 
average surface dimension (x or y) was around 100.)
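
One way to gather such statistics (a hypothetical sketch, effective
per source file only; a real version would hook the call in a shared
header):

    #include <stdio.h>
    #include "SDL.h"

    /* Log every surface allocation before forwarding it to SDL. */
    static SDL_Surface *create_surface_logged(Uint32 flags,
            int w, int h, int depth, Uint32 Rmask, Uint32 Gmask,
            Uint32 Bmask, Uint32 Amask)
    {
        fprintf(stderr, "surface: %dx%d @ %d bpp\n", w, h, depth);
        return SDL_CreateRGBSurface(flags, w, h, depth,
                                    Rmask, Gmask, Bmask, Amask);
    }

    /* Route the application's allocations through the logger. */
    #define SDL_CreateRGBSurface create_surface_logged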

Anyway, I'm not sure this is a big deal, as there are OpenGL extensions 
called "NV_texture_rectangle" and "EXT_texture_rectangle" that do what 
their names say, i.e. they allow textures with non-power-of-two 
dimensions, so applications stop wasting memory padding surfaces up to 
2^n. You could just wait for that to become part of the standard if 
you don't want to solve an NP-complete problem (I for one don't :)
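
Checking for those extensions at runtime is simple enough; a sketch (a
robust check would match whole extension names, since strstr() can hit
on substrings):

    #include <string.h>
    #include "SDL_opengl.h"

    /* Returns nonzero if either rectangle-texture extension is
     * advertised. Requires a current GL context. */
    static int have_texture_rectangle(void)
    {
        const char *ext = (const char *) glGetString(GL_EXTENSIONS);
        return ext && (strstr(ext, "GL_NV_texture_rectangle")
                    || strstr(ext, "GL_EXT_texture_rectangle"));
    }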

Stephane