The goal of this project was to stabilize the Processing Video Library at v2.0, which is built on the GStreamer media framework. The aim was to handle the library-side back-end code so that users of the Processing Development Environment (PDE) can simply run their Processing editor and get high-quality video playback. The tasks included upgrading the video framework, moving to native buffer playback, improving capture support, tackling notable bugs, and providing documentation for better use of the library. By handling the back-end framework seamlessly, the library lets users focus on easy video playback in their own projects.
The first task in upgrading the library was updating it to the newest version of GStreamer: previous versions of Processing Video shipped binaries older than GStreamer 1.X, which is a much more modern and stable set of GStreamer plugins and provides better video playback. Additions in GStreamer 1.X include 4K video playback, more optimized media streaming, and a wider range of functionality.
After updating the GStreamer binaries referenced by gstreamer.library.path and gstreamer.plugin.path, some modifications were made to the LibraryLoader.java class to load the appropriate, more up-to-date .dlls associated with GStreamer 1.X. Additional upgrades to the library include moving to JNA 5.4.0 and gst-java-core-1.1.0.
The second task involved utilizing the native buffer sink to handle the copy method. Since copying pixels to the texture is where most of the graphics processing time is spent, it makes sense to copy the texture information directly from GStreamer to a Processing texture, without an intermediary buffer. My mentor, Andres, aided me greatly in understanding this implementation. It boiled down to using the gst-java-core bindings to call the copyBufferFromSource method on the Object bufferSink inside NewSampleListener. This implementation allowed for much smoother playback and adheres much better to the updated version of the gst-java-core bindings.
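The idea behind the direct copy can be sketched in plain Java. This is an illustrative example only, not the library's actual binding code: it shows a single bulk copy from a native (direct) buffer, as a GStreamer sink would hand over, straight into a destination pixel array, with no intermediate Java-side buffer.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;

// Sketch of the direct-copy idea (not the library's real code): the
// frame bytes delivered by the sink are viewed as packed ints and
// copied into the texture's pixel array in one bulk operation.
public class DirectCopy {
    public static void copyFrame(ByteBuffer nativeFrame, int[] texturePixels) {
        // View the native bytes as packed pixel ints and copy in one call.
        IntBuffer src = nativeFrame.order(ByteOrder.nativeOrder()).asIntBuffer();
        src.get(texturePixels, 0, texturePixels.length);
    }

    public static void main(String[] args) {
        // Simulate a 2x2 frame delivered by the sink.
        ByteBuffer frame = ByteBuffer.allocateDirect(16).order(ByteOrder.nativeOrder());
        frame.asIntBuffer().put(new int[] {0xFF0000FF, 0xFF00FF00, 0xFFFF0000, 0xFFFFFFFF});
        int[] pixels = new int[4];
        copyFrame(frame, pixels);
        System.out.println(Integer.toHexString(pixels[0])); // ff0000ff
    }
}
```

In the library itself, this single-copy step is what the gst-java-core copyBufferFromSource call performs on the native GStreamer buffer.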
The third task was to configure capture support with GStreamer 1.X and address some common capture bugs by upgrading the underlying media capture and adjusting how devices are defined. For example, users had reported that the library would malfunction when multiple webcams with the same name were connected. The library previously identified a device by its display name, so if two devices shared a display name, only the first was picked up. One potential solution was to simply list the 'raw' device names, which are unique, but an end user would not easily know or recognize those names. The implemented solution keeps the readable display names and checks for duplicates; when duplicates exist, the devices are numbered as 'DeviceName #n', so the names remain readable. Users can now switch between same-name cameras and have more control over the capture system.
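The duplicate-naming scheme described above can be sketched as follows. This is an illustrative, self-contained version (class and method names are hypothetical, not the library's actual API): names that appear more than once get a ' #n' suffix, while unique names are left alone.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the device-name disambiguation described above.
public class DeviceNames {
    public static List<String> disambiguate(List<String> displayNames) {
        // First pass: count how many times each display name appears.
        Map<String, Integer> counts = new HashMap<>();
        for (String name : displayNames) {
            counts.merge(name, 1, Integer::sum);
        }
        // Second pass: append " #n" only to names that are duplicated.
        Map<String, Integer> seen = new HashMap<>();
        List<String> result = new ArrayList<>();
        for (String name : displayNames) {
            if (counts.get(name) > 1) {
                int n = seen.merge(name, 1, Integer::sum);
                result.add(name + " #" + n);
            } else {
                result.add(name);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(disambiguate(List.of("HD Webcam", "HD Webcam", "USB Cam")));
        // → [HD Webcam #1, HD Webcam #2, USB Cam]
    }
}
```

Keeping the display name as the prefix means the numbered entries still read naturally in a device list, while each entry is now a unique key.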
Once the primary tasks were completed, my mentor and I went through the issues users had filed on GitHub, and the task was to address the most prominent bugs that multiple people were experiencing.
One of the most prominent was that the Frames.pde example for Movie playback was 'sticking', and users were not able to scrub smoothly through frames as one might expect. While this bug was elusive, the simple solution was to flesh out NewPrerollListener to function similarly to NewSampleListener. With only NewSampleListener in place, videos played back normally with no frame manipulation; however, once the user gained control to scrub through frames, the media framework did not have enough time to pre-buffer the appropriate frame on demand. By implementing the rest of NewPrerollListener in a similar fashion to NewSampleListener, Frames.pde can prebuffer nearby frames so a user can more easily scrub between frames in either direction.
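The shape of the fix can be sketched as follows. This is a hypothetical, simplified illustration (the interface and field names are mine, not the library's): the preroll path, which GStreamer uses to produce a frame after a seek while paused, is routed through the same delivery code as the normal playback path, so scrubbed frames reach the texture just like streamed ones.

```java
// Hypothetical sketch of routing both listeners through one copy path.
public class SinkListeners {
    public interface FrameHandler { void handle(int[] pixels); }

    private final FrameHandler handler;
    public int framesDelivered = 0;

    public SinkListeners(FrameHandler handler) { this.handler = handler; }

    // Called for every decoded frame during normal playback.
    public void onNewSample(int[] pixels) { deliver(pixels); }

    // Called once per seek while paused. Before the fix, this path was
    // effectively unimplemented, so scrubbed frames never reached the texture.
    public void onNewPreroll(int[] pixels) { deliver(pixels); }

    private void deliver(int[] pixels) {
        framesDelivered++;
        handler.handle(pixels);
    }
}
```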
Another simple feature added to this project was printing out user information about which version of GStreamer they are using to the Processing console. Most users will just use the bundled library; however, some experienced users might want to save space or use their own system install of GStreamer. If an experienced user removes the bundled GStreamer files, the library will automatically find the system install and inform the user where the new root directory is located.
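The lookup order described above can be sketched like this. The paths, class, and method names here are hypothetical, purely to illustrate the behavior: prefer the bundled GStreamer files, fall back to the first system install found, and report the chosen root to the console.

```java
import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch (not the library's real code) of choosing between
// a bundled GStreamer and a system install, and reporting the result.
public class GStreamerLocator {
    public static String resolveRoot(String bundledPath,
                                     List<String> systemCandidates,
                                     Predicate<String> exists) {
        if (exists.test(bundledPath)) {
            System.out.println("Using bundled GStreamer at " + bundledPath);
            return bundledPath;
        }
        for (String candidate : systemCandidates) {
            if (exists.test(candidate)) {
                System.out.println("Bundled GStreamer not found; using system install at " + candidate);
                return candidate;
            }
        }
        return null; // no GStreamer install found
    }

    public static void main(String[] args) {
        // Simulate a user who deleted the bundled files: only the
        // (hypothetical) system path exists.
        String root = resolveRoot(
            "library/windows-amd64",
            List.of("C:/gstreamer/1.0/x86_64"),
            path -> path.startsWith("C:/"));
        System.out.println("root = " + root);
    }
}
```

The existence check is passed in as a predicate here only so the sketch is testable; in practice it would simply check the filesystem.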
Finally, some documentation was added to show users how to better develop with the library. This included a test case for video capture in the Eclipse setup instructions, and the steps for installing the library and using it seamlessly on a Windows system.
Overall, the greatest challenges came with learning the new tools and framework associated with the Processing Video Library. As a game programmer, using Eclipse, Java, and gst-java-core, and following GStreamer documentation and compiling examples written in C, were all unfamiliar topics at the start of this summer. Testing examples that use the gst-java-core bindings was usually not possible, as the number of users who utilize them is small, and working examples are few and far between. Therefore, learning about GStreamer and the gst-java-core bindings was best done by working in the Processing Video library itself as a test environment.
Other challenges involved getting the native buffer functional. This graphics issue generally manifested as a blank screen and was not easily debugged, as opposed to something running on the CPU. Some other challenges involved the JVM: between testing in the Processing editor and Eclipse, I would sometimes hit errors because the JVM process had not been completely killed. This tended to be the culprit whenever I was unexpectedly met with a new set of Java errors.
Work To Be Done
Two major improvements could be made to the library, but due to time constraints and the complexity of the tasks, they were left uncompleted.
The first, more ambitious future feature would allow users to utilize their own dynamic media pipeline. Experienced users would be able to input their own pipeline string, using the same syntax as when running GStreamer from the command line. This would give users more control over how video playback is done. As it stands, Movie.java uses a template pipeline called Playbin that does simple playback, and in Capture.java the pipeline defaults to autovideosrc, videoscale, videoconvert, and capsfilter. If users had more control of the pipeline, they could theoretically perform any possible function of GStreamer from within the Processing editor window, using something similar to gst_parse_launch(), discussed in the GStreamer documentation.
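As a rough sketch of what this could look like, the class below only assembles a gst-launch-style description string from a list of elements; actually instantiating the pipeline would require a GStreamer install and a call along the lines of gst_parse_launch() (or its gst-java-core counterpart), so that step is shown only as a comment. The class and method names are hypothetical.

```java
import java.util.StringJoiner;

// Hypothetical sketch: build a gst-launch-style pipeline description
// that a user could supply instead of the library's fixed defaults.
public class PipelineDescription {
    public static String describe(String... elements) {
        // GStreamer's command-line syntax links elements with " ! ".
        StringJoiner joiner = new StringJoiner(" ! ");
        for (String element : elements) joiner.add(element);
        return joiner.toString();
    }

    public static void main(String[] args) {
        // Mirrors the Capture.java defaults mentioned above.
        String desc = describe("autovideosrc", "videoscale", "videoconvert", "capsfilter");
        System.out.println(desc); // autovideosrc ! videoscale ! videoconvert ! capsfilter
        // With the GStreamer bindings available, the string could then be
        // parsed into a live pipeline, along the lines of:
        // Pipeline pipeline = (Pipeline) Gst.parseLaunch(desc);
    }
}
```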
The second future feature might involve implementing debug filter control so users can parse their own warning messages. The benefit of this is having GStreamer-side debug control when using the dynamic pipeline, or this feature might give users the ability to turn off unnecessary warnings that might not affect what they are trying to do from the Processing editor. While this feature is not accessible via gst-java-core-1.1.0, after talking with the developer, it may become a feature in the future.