Programmer/Researcher - Unity
My interest in video codecs, streaming, gaming, 3D engines, mobile development, and VR culminated in a master's thesis whose purpose was to test a more efficient, lower-latency method of mobile cloud gaming.
Read the paper here
I was initially inspired by my boss and mentor at the Envision Center, George Takahashi, who told me that the number one problem in pulling off these kinds of combined systems, especially those that involve streaming, is latency. Whether it is network latency or motion-to-photon latency, my goal was to research ways to take these forms of latency out of the equation.
Through my research, I stumbled upon work by John Carmack on latency reduction methods such as View Bypass and Asynchronous Time Warp. The catch was that this information had supposedly been scrubbed from the internet, so I had to do some sleuthing to find it. After reading through the work, I took the attempt to remove it as evidence of its viability as a latency reduction strategy, so I began building an application that harnessed the View Bypass algorithm in a mobile cloud gaming setting. In this setup, the mobile phone captures user input, the user's position is sent to a powerful desktop server for processing, and the rendered frames are streamed back down to the phone for display. The idea is that a mobile phone can handle standard video streaming but is not quite powerful enough to render a console-quality game.
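To illustrate why a latency-hiding step matters in this pipeline, here is a toy simulation of my own (a sketch, not code from the thesis): the camera pans at 1 degree per frame, rendered frames arrive from the server a few frames late, and a client-side correction is assumed to fully close the gap between the stale rendered pose and the freshest input.

```python
from collections import deque

NETWORK_DELAY_FRAMES = 3  # assumed server round-trip, in display frames


def simulate(frames, use_view_bypass):
    """Return the average view error (degrees) for a camera panning 1 deg/frame.

    Without correction, each displayed frame was rendered for a pose that is
    NETWORK_DELAY_FRAMES old. With a View-Bypass-style late correction, the
    client re-aims the stale frame using the freshest pose; in this toy model
    the correction is assumed to remove the error entirely.
    """
    in_flight = deque()  # poses currently being rendered/streamed by the server
    errors = []
    for t in range(frames):
        current_pose = float(t)        # latest user input: 1 degree per frame
        in_flight.append(current_pose)
        if len(in_flight) > NETWORK_DELAY_FRAMES:
            rendered_pose = in_flight.popleft()  # frame arriving this tick
            if use_view_bypass:
                displayed_error = 0.0  # assume warp fully corrects the pan
            else:
                displayed_error = current_pose - rendered_pose
            errors.append(displayed_error)
    return sum(errors) / len(errors)
```

In this model the uncorrected stream lags by exactly the round-trip delay (3 degrees of pan per frame), while the corrected stream shows none, which is the best case the real system can only approximate.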
The greatest challenge of this project was determining which codec and streaming protocol would work best for my purposes. I looked at HLS and DASH, and finally settled on WebRTC for a more real-time, communication-oriented streaming method. While WebRTC gave me lower latency, the tradeoff was a drop in quality caused by congestion control, which keeps the stream playing without buffering. Since my local internet connection tops out at around 10 Mbps, using a lightweight video codec (VP8) inside WebRTC was the more viable option for cleanly displaying frames. My methodology then measured the quality difference between this streaming model and a control image, determining whether streaming with View Bypass or without it produced frames closer in image structure to the original frame.
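Comparing "image structure" against a control frame is typically done with an SSIM-style metric. The following is a minimal, single-window SSIM sketch in NumPy (my own simplification for illustration; the thesis's exact metric may differ, and production tools use Gaussian-windowed SSIM):

```python
import numpy as np


def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM over two grayscale images (no Gaussian windowing).

    Returns 1.0 for identical images and lower values as luminance,
    contrast, or structure diverge.
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM formula
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

A streamed-and-decoded frame can then be scored against the original render: the model (with or without View Bypass) whose frames score closer to 1.0 preserves more of the original image structure.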