I am a Senior Build Engineer at Luminar Technologies, working on a monorepo build system and its related infrastructure.
I am passionate about software engineering and writing scalable code that remains maintainable over time. Currently, I am teaching these principles to undergraduate students at TUM by offering the Software Engineering Lab.
I studied Electrical and Computer Engineering at the Technical University of Munich (TUM) for both my Bachelor's and Master's degrees. From 2016 to 2018, I was a Software Engineer at Objective Software GmbH and Luxoft Inc, working in cooperation with the BMW Group in the areas of automotive software and autonomous driving. From 2019 to 2022, I worked at the Chair of Media Technology at TUM as a Research and Teaching Associate in the research group of Prof. Dr.-Ing. Eckehard Steinbach, where I received my Engineering Doctorate in 2022. My research at TUM focused on video processing, compression, and transmission for multi-camera systems in autonomous and teleoperated driving. In 2022, I joined CareX.AI as a Senior Software Engineer, working on the software architecture and quality of its codebase. My research at CareX.AI focused on camera-based vital signs measurement.
Selection of courses & visits
Failures of autonomous vehicles are inevitable. One possible solution to cope with these failures is teleoperated driving, in which a human operator controls the vehicle from a remote location. In this thesis, adaptive video streaming for teleoperated driving is investigated to provide the operator with the best possible situation awareness when controlling the vehicle remotely. A teledriving framework for the adaptation of individual camera views based on the current traffic situation is developed. Additionally, a preprocessing filter concept is proposed that allows for individual rate/quality adaptation while considering the hardware limitations of autonomous vehicles.
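The core idea of adapting individual camera views can be illustrated with a minimal sketch: given a total bitrate budget, views that matter most in the current traffic situation receive a larger share. The function name, the weights, and the proportional scheme below are purely hypothetical and are not the allocation scheme from the thesis.

```python
def allocate_bitrates(total_kbps, importance):
    """Split a total bitrate budget across camera views in proportion
    to their importance weights (hypothetical illustrative scheme)."""
    total_weight = sum(importance.values())
    return {view: total_kbps * w / total_weight
            for view, w in importance.items()}

# Example: the front view matters most in the current traffic situation.
weights = {"front": 4.0, "left": 1.0, "right": 1.0, "rear": 2.0}
budget = allocate_bitrates(8000, weights)  # total budget: 8 Mbit/s
```

In a real teledriving system the weights would be derived from the traffic situation (e.g. an upcoming left turn raising the left view's importance) and updated continuously.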
Programming and software engineering differ in the aspects of time and scale. Going beyond implementing software that merely fulfills requirements, software engineering also means writing code that can be maintained by multiple contributors over months, years, or even decades. Due to the limited duration of university projects, students mainly learn to write software that works once. In industry, software lifetimes are longer, and the aspect of time becomes highly relevant. Professional software must be readable and modular to be maintainable. In this paper, we present an experience report on a novel university course in software engineering. The course teaches the concepts of unit testing, refactoring, and automation tools to novices with basic programming experience. We present these concepts using the example of C++, but they are applicable to any programming language. Our goal is to teach students the key concepts of software engineering early on, giving them the opportunity to benefit from them in their subsequent projects. We present these concepts in five plenary lectures with live coding sessions, after which student teams apply them in five practical homework assignments. All assignments contribute to a single project maintained and improved by the student groups for the duration of the course. Additionally, we present a teaching tool framework that can be used to automate tasks for student project management and examinations. Finally, we discuss the lessons learned from conducting this course for the first time. We believe this course is a valuable step towards including essential software engineering skills in the education of science and engineering students.
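The unit-testing concept taught in the course is language-agnostic; the course itself uses C++, but a minimal sketch in Python captures the idea: each behavior of a function gets its own small, named test. The function and test names below are hypothetical examples, not course material.

```python
import unittest

def celsius_to_fahrenheit(c):
    """Hypothetical function under test."""
    return c * 9 / 5 + 32

class TestConversion(unittest.TestCase):
    # Each test checks one well-known reference point in isolation.
    def test_freezing_point(self):
        self.assertEqual(celsius_to_fahrenheit(0), 32)

    def test_boiling_point(self):
        self.assertEqual(celsius_to_fahrenheit(100), 212)
```

Run with `python -m unittest`. Tests like these are what make later refactoring safe: the suite documents the expected behavior and catches regressions automatically.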
Currently, an increasing number of technical systems are equipped with multiple cameras. Limited by cost and size, such systems are often restricted to a single hardware encoder. Combining all views into a single superframe allows all camera views to be streamed at the same time, but it prevents individual rate/quality adaptation of those views. We propose a preprocessing filter concept that allows for individual rate/quality adaptation while using a single encoder. Additionally, we create a preprocessor model that estimates the required preprocessing filter parameters from the specified encoding parameters. This means our approach can be used with any existing multi-view adaptation scheme designed for controlling multiple encoders. We design both an analytical and a Machine Learning-based bitrate model. Because both models perform equally well, we suggest using either one as the core of our preprocessor model. Both models are specifically designed to estimate the influence of the quantization parameter, frame rate, frame size, group of pictures length, and a Gaussian low-pass filter on the video bitrate. Furthermore, our rate models outperform state-of-the-art bitrate models by at least 22% in terms of overall root mean square error. Our bitrate models are the first of their kind to consider the influence of a Gaussian low-pass filter. We evaluate the preprocessing approach by streaming six camera views in a teledriving scenario with a single encoder and compare it to using six individual encoders. The experimental results demonstrate that the preprocessing approach achieves bitrates similar to those of the individual encoders for all views. While achieving a comparable rate and quality for the most important views, our approach requires a total bitrate that is 50% lower than that of a single-encoder approach without preprocessing.
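To make the shape of an analytical bitrate model concrete, here is a toy power-law sketch in the spirit of classical rate models: bitrate decreases with coarser quantization and increases with frame rate and frame size. The functional form, the exponents, and all parameter values are illustrative assumptions; they are not the model from the paper, and the Gaussian low-pass term is omitted.

```python
def bitrate_estimate(q, f, s, r_max, a=1.2, b=0.6, q_min=20, f_max=30):
    """Toy power-law rate model (hypothetical parameters).

    q: quantization parameter, f: frame rate,
    s: frame size as a fraction of full resolution,
    r_max: bitrate at q_min, f_max, and full resolution (kbit/s).
    """
    return r_max * (q_min / q) ** a * (f / f_max) ** b * s
```

A preprocessor model would invert a relationship like this: given a target bitrate requested by the multi-view adaptation scheme, it solves for the preprocessing filter parameters that produce that bitrate under a single shared encoder configuration.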
Teledriving is a possible fallback mode for coping with failures of fully autonomous vehicles. One important requirement for teleoperated vehicles is a reliable, low-delay data transmission solution that adapts to the current network conditions to provide the operator with the best possible situation awareness. Currently, no easily accessible solution is available for evaluating such systems and algorithms in a fully controllable environment. To this end, we propose an open-source framework for teleoperated driving research using low-cost off-the-shelf components. The proposed system is an extension of the open-source simulator CARLA, which is responsible for rendering the driving environment and providing reproducible scenario evaluation. As a proof of concept, we evaluated our teledriving solution against CARLA in remote and local driving scenarios. The proposed teledriving system leads to almost identical performance measurements for local and remote driving. In contrast, remote driving via CARLA's client-server communication results in drastically reduced operator performance. Furthermore, the framework provides an interface for adapting the temporal resolution and target bitrate of the compressed video streams. The proposed framework reduces the setup effort required for teleoperated driving research in academia and industry.
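An adaptation algorithm plugged into such an interface could, in its simplest form, track the measured network throughput and trade temporal resolution against bitrate when the budget gets tight. The rule below is a hypothetical sketch for illustration only; the function name, thresholds, and margins are assumptions, not part of the framework's actual API.

```python
def adapt_stream(throughput_kbps, current_fps):
    """Hypothetical adaptation rule: keep the target bitrate below the
    measured throughput with a safety margin, and halve the temporal
    resolution (down to a floor) when the budget gets tight."""
    target_kbps = int(0.8 * throughput_kbps)   # 20% safety margin
    if target_kbps >= 2000:                     # assumed comfort threshold
        fps = current_fps
    else:
        fps = max(10, current_fps // 2)         # assumed 10 fps floor
    return target_kbps, fps
```

For example, `adapt_stream(5000, 30)` keeps the full 30 fps, while `adapt_stream(2000, 30)` drops to 15 fps so the remaining bits per frame stay high enough for the operator to read the scene.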