Supports capture devices as a source of media, allowing users to display live camera feeds, captured imagery from other computers, or media streamed from the internet.
These platforms provide an easy-to-use, powerful set of tools for creating 3D scenes – whether for a game, a simulation environment or a visitor attraction.
By pre-rendering the parts of the 3D scene that are known to be unchanging and require no real-time interaction, the real-time graphics pipeline can be focused on the parts of the scene that must be truly interactive. The pre-rendered playback media and the real-time content are composited dynamically, using depth information generated by the real-time engine. In this mode, real-time pixels that are occluded by, or already present in, the pre-rendered frame are culled, so no redundant pixels add to the frame load.
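The depth-based compositing step can be sketched as follows. This is a minimal per-pixel illustration, not the engine's actual implementation: the function name, data layout and the simple Python loop are all assumptions, chosen only to show how a real-time pixel replaces a pre-rendered one exactly when its depth value places it closer to the camera.

```python
# Hypothetical sketch of depth-based compositing between a pre-rendered
# frame and a real-time frame. Frames are 2D lists of pixel values;
# depth buffers are 2D lists of distances from the camera.

def composite(prerendered, realtime, rt_depth, pre_depth):
    """Return a frame in which real-time pixels replace pre-rendered
    ones only where the real-time geometry is closer to the camera."""
    out = []
    for row_pre, row_rt, row_dp, row_dr in zip(prerendered, realtime,
                                               pre_depth, rt_depth):
        out_row = []
        for pix_pre, pix_rt, d_pre, d_rt in zip(row_pre, row_rt,
                                                row_dp, row_dr):
            # Real-time pixels occluded by pre-rendered content are
            # culled here: they never reach the final frame.
            out_row.append(pix_rt if d_rt < d_pre else pix_pre)
        out.append(out_row)
    return out

# 2x2 example: the real-time layer (depth 1.0 everywhere) wins only
# where the pre-rendered scene is further away (depth 2.0), and loses
# to a pre-rendered foreground element at depth 0.5.
pre  = [["bgA", "bgB"], ["fgC", "bgD"]]
rt   = [["rtA", "rtB"], ["rtC", "rtD"]]
dpre = [[2.0, 2.0], [0.5, 2.0]]
drt  = [[1.0, 1.0], [1.0, 1.0]]
print(composite(pre, rt, drt, dpre))
```

In a real engine this comparison happens on the GPU via the depth test; the loop above only makes the selection rule explicit.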
Functionality is also provided for multiple real-time feeds to be composited together in 3D, enabling real-time rendering of a scene to be distributed across multiple render nodes.
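Merging several real-time feeds follows the same depth principle: at each pixel position, the sample nearest the camera wins. The sketch below is an illustrative assumption about the data flow (feed and function names are invented), showing how per-node colour/depth pairs could be reduced into one frame.

```python
# Hypothetical sketch of merging real-time feeds from several render
# nodes. Each feed is a flat list of (pixel, depth) samples covering
# the same pixel positions in the same order.

def merge_feeds(feeds):
    """At every pixel position, keep the sample with the smallest
    depth, i.e. the one closest to the camera."""
    merged = []
    for samples in zip(*feeds):
        merged.append(min(samples, key=lambda s: s[1]))
    return merged

# Two nodes each render part of the scene; the merged frame takes the
# nearest sample at each position.
node_a = [("a0", 3.0), ("a1", 1.0)]
node_b = [("b0", 2.0), ("b1", 4.0)]
print(merge_feeds([node_a, node_b]))
```

Because the reduction is associative, feeds can be merged pairwise in any order, which is what makes distribution across nodes straightforward.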
A real-time stencil feed adds a further dynamic factor. Here, the R, G and B channels are used to carry per-pixel information for the real-time assets, adding dynamic lighting, shading and transparency effects.
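One way such a stencil feed could drive the final image is sketched below. The specific channel mapping (R as lighting gain, G as shading amount, B as opacity) and the blending arithmetic are illustrative assumptions, not a documented mapping; the point is only that one RGB feed can carry three independent per-pixel controls.

```python
# Hypothetical sketch of applying an RGB stencil feed to a real-time
# asset pixel. Assumed mapping: R = lighting intensity, G = shading
# amount, B = opacity; all values in the range 0.0..1.0.

def apply_stencil(base, stencil, background):
    """Modulate a base colour with stencil data, then blend the result
    over the background using the stencil's opacity channel."""
    r, g, b = stencil
    lit = tuple(c * r for c in base)                   # R: lighting gain
    shaded = tuple(c * (1.0 - 0.5 * g) for c in lit)   # G: darken for shading
    alpha = b                                          # B: opacity
    return tuple(round(f * alpha + bg * (1.0 - alpha), 3)
                 for f, bg in zip(shaded, background))

# Full lighting, no shading, 50% opacity over a black background.
print(apply_stencil((0.8, 0.6, 0.4), (1.0, 0.0, 0.5), (0.0, 0.0, 0.0)))
```

In practice this modulation would run in a fragment shader sampling the stencil feed as a texture; the function above only makes the per-channel roles explicit.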