Jitter extends the Max/MSP programming environment to support real-time manipulation of video, 3D graphics, and other data sets within a unified processing architecture. Because Jitter, like Max/MSP, is generic in nature, it offers unlimited possibilities for creative exploration. Whether you are interested in video processing, interactive art, teaching new media, or data visualization, Jitter offers both high- and low-level tools for working in exciting new ways.
Jitter abstracts all data as multidimensional matrices, so objects that process images can also process audio, volumetric data, 3D vertices, or any numerical information you can get into the computer. This common representation simplifies the reinterpretation and transformation of media. And with Jitter 1.5, many types of data can be processed on the GPU, leveraging the massively parallel computing power of today's graphics cards.
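As a concrete illustration, here is a minimal sketch using the JitterMatrix class from Jitter's JavaScript support, written for Max's js object; the matrix size and the downstream objects mentioned in the comments are placeholders, not requirements.

```javascript
// A 1-plane float32 matrix, 256 cells wide by 64 high.
var m = new JitterMatrix(1, "float32", 256, 64);

// Fill it with a horizontal sine ramp. Whether these numbers end up
// treated as pixel brightness, audio samples, or vertex offsets is
// decided by the objects the matrix is passed to, not by the matrix.
for (var y = 0; y < 64; y++) {
    for (var x = 0; x < 256; x++) {
        m.setcell2d(x, y, Math.sin(2 * Math.PI * x / 256));
    }
}

// Send the matrix out of the js object, e.g. to a jit.pwindow for
// display or to an OpenGL object for use as geometry or a texture.
outlet(0, "jit_matrix", m.name);
```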
The Jitter 1.5 upgrade was released in July 2005. The list of improvements is extensive.
Performance
* Cross-platform architecture for GPU hardware acceleration. This fundamental shift in processing offers a dramatic increase in performance for image processing and general-purpose computation, as well as for material shaders and other GPU effects.
* Fast YUV 4:2:2 (UYVY) video transfer to the GPU. Using the half-bandwidth, half-chroma data representation common in many of today's video formats, Jitter can decompress and transfer YUV 4:2:2 data to the graphics card more rapidly. Together with GPU processing, this permits real-time processing of video footage at up to HD resolution.
* Multiprocessor support. Many CPU-based Jitter objects now exploit multiprocessor and multi-core systems, taking advantage of all the resources available on today's machines.
Networking
* Compressed RTSP streams. Jitter 1.5 uses either the cross-platform LiveMedia architecture or the QuickTime Broadcasting architecture on Macintosh to stream a variety of video codecs in real time.
* Uncompressed Jitter matrix streams. For lossless transmission of Jitter matrices of arbitrary type, plane count, and dimensionality, Jitter 1.5 allows direct network communication in a Jitter-native matrix format (see the sketch after this list).
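The following sketch illustrates the second item, assuming the jit.net.send object and its ip and port attributes (the exact object and attribute names should be checked against the Jitter 1.5 reference); the address and port below are placeholders.

```javascript
// Send an uncompressed Jitter matrix to another machine on the network.
// Assumes jit.net.send with ip/port attributes -- verify against the docs.
var sender = new JitterObject("jit.net.send");
sender.ip = "192.168.1.20";   // placeholder address of the receiving machine
sender.port = 7474;           // placeholder port

// Any matrix can be sent: type, plane count, and dimensions are preserved.
var m = new JitterMatrix(4, "char", 320, 240);
m.setall(255, 255, 128, 0);   // a flat ARGB test frame

// In a patcher you would connect a matrix outlet to jit.net.send's inlet;
// from js, passing the jit_matrix message by name should do the same.
sender.jit_matrix(m.name);
```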
Programmability
* Java and JavaScript support. Instantiate and control Jitter objects directly from text-based programming languages that offer finer control over complex tasks (see the sketch after this list).
* Expressions. Succinctly define mathematical expressions for calculating matrix data.
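Both items can be sketched together from Max's js object; the expression string is just an example, and the exact jit.expr syntax should be confirmed against its reference page.

```javascript
// Instantiate a Jitter object from JavaScript and drive it directly.
// jit.expr evaluates a mathematical expression over every matrix cell.
var exprObj = new JitterObject("jit.expr");
exprObj.expr = "sin(norm[0]*TWOPI)";   // a horizontal sine pattern

// Source and destination matrices for the calculation.
var src = new JitterMatrix(1, "float32", 320, 240);
var dst = new JitterMatrix(1, "float32", 320, 240);

// matrixcalc runs the object's calculation, just as a patch cord would.
exprObj.matrixcalc(src, dst);

// Pass the result on to whatever is connected to the js object's outlet.
outlet(0, "jit_matrix", dst.name);
```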
Interoperability
* DirectX video input and output support. Native Windows video input and output offers higher performance and more reliable I/O, without the need for third-party QuickTime bridges.
* FreeFrame plugin support. FreeFrame is an open standard for video processing, and its plugins provide a rich set of options for building custom networks of effects.
* Improved Flash integration. More control over, and communication with, Flash media through QuickTime. (Currently limited to the Flash 5 subset of functionality.)
* Improved MSP audio integration. Treat MSP buffer~ objects as Jitter matrix data, and convert audio vectors to and from Jitter matrices for frame-based processing and audio visualization.
More
* Procedural texturing and geometry. Synthesize textures and geometry with a diverse set of noise and pattern basis functions.
* Volume visualization. View volume data sets either as 3D textures or as geometry obtained through surface reconstruction.
* High Dynamic Range image support. Read and write floating point images using the industry standard OpenEXR file format.