- People who just want a framebuffer, and use SDL 1.2-style blits and direct pixel access to do the job.
- People who just want an OpenGL context, and use OpenGL or OpenGL ES to do the job.
- People who want a hardware-accelerated 2D API.
For 1 and 2, the functionality was available in SDL 1.2, and the feature set is well understood.
For 3, this is a new area that SDL 1.3 is supporting, and it's really easy to lose the "Simple" in "Simple DirectMedia Layer".
So I think this is a good time to remember that the goal of the SDL rendering API is simply:
Hardware accelerate operations that were typically done with the SDL 1.2 API.
This is tricky, since people did lots of interesting things with direct framebuffer access, but let me break it down into a feature set:
- copying images
- filling rectangles
- drawing single pixel lines
- drawing single pixel points
SDL 1.2 provided colorkey, alpha channels, and per-surface alpha, as well as nearest-pixel scaling.
Again, to break that down into a feature set:
- blending for all operations
- single vertex alpha for all operations
- single vertex color for all operations
- scaling for image copy operations
It's tempting to add functionality here, but down that road lies madness...