Hagit Hel-Or
Title: Computer Vision for Human Behavior Understanding
Abstract:
In this talk I will present studies in which Computer Vision and Machine Learning are harnessed to study human behavior. We use various sensors and data-capturing devices to study body motion, hand motion, facial expression and emotion, and more. I will briefly review several studies performed in my lab, and will expand on one specific study in which we developed an automated system to evaluate fall detection, using a novel multi-3D-camera system (work in collaboration with Prof. Ilan Shimshoni and physiotherapists at the Nahariya Hospital).
Dan Levi
Title: Camera-Based 3D Lane Detection and Other Perception Challenges in Autonomous Driving
Abstract:
I will introduce “3D-LaneNet”, a network that directly predicts the 3D layout of lanes in a road scene from a single image. This work marks a first attempt to address this task with on-board sensing without assuming a known constant lane width or relying on pre-mapped environments. Our network architecture, 3D-LaneNet, applies two new concepts: intra-network inverse-perspective mapping (IPM) and anchor-based lane representation. The intra-network IPM projection facilitates a dual-representation information flow in both regular image-view and top-view. An anchor-per-column output representation enables our end-to-end approach which replaces common heuristics such as clustering and outlier rejection, casting lane estimation as an object detection problem. In addition, our approach explicitly handles complex situations such as lane merges and splits. Results are shown on two new 3D lane datasets, a synthetic and a real one. For comparison with existing methods, we test our approach on the image-only tuSimple lane detection benchmark, achieving performance competitive with state-of-the-art.
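As a rough, stand-alone illustration of the inverse-perspective mapping idea, here is a minimal OpenCV sketch that warps a road region from image view to top view using a plane homography. The synthetic image, the trapezoid corners, and the output size are hypothetical placeholders; 3D-LaneNet's intra-network IPM is applied to intermediate feature maps inside the network rather than as a plain image warp like this one.

```python
# Minimal inverse-perspective mapping (IPM) sketch with OpenCV.
# All coordinates below are hypothetical placeholders; 3D-LaneNet applies
# a differentiable IPM to intermediate feature maps inside the network,
# not a simple image warp like the one shown here.
import cv2
import numpy as np

def top_view(image, src_pts, dst_size=(400, 600)):
    """Warp a road region from image view to a top (bird's-eye) view.

    src_pts: four pixel points on the road plane, ordered
             far-left, far-right, near-right, near-left.
    """
    w, h = dst_size
    dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(image, H, (w, h))

if __name__ == "__main__":
    # Synthetic forward-facing "road" image with two converging lane lines.
    img = np.zeros((720, 1280, 3), np.uint8)
    cv2.line(img, (180, 700), (600, 420), (255, 255, 255), 5)
    cv2.line(img, (1100, 700), (680, 420), (255, 255, 255), 5)

    # Hypothetical trapezoid around the ego lane (pixel coordinates).
    src = [(560, 420), (720, 420), (1100, 700), (180, 700)]
    bev = top_view(img, src)
    cv2.imwrite("road_top_view.png", bev)
```

In the warped output the two lane lines appear roughly parallel, which is what makes top-view anchor representations convenient for lane estimation.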
Dan Feldman
Title: Visual Navigation for Drones
Abstract:
According to the law in Israel and the US, you can fly a drone inside a city or indoors only if it weighs less than 250 grams. Practically, this means it can carry only a weak micro-computer and an RGB camera for autonomous navigation. This requires efficient real-time algorithms that do not exist today.
I will formalize some of these problems and suggest the first provably optimal and practical algorithms for some of them. This is done using modern optimization techniques such as core-sets, sketches, and Sum-of-Squares (SOS).
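To give a generic flavor of the "sketch" idea, here is a textbook-style sketch-and-solve example (not the core-set or SOS algorithms from the papers below): a random Gaussian projection compresses an overdetermined least-squares problem into a much smaller one that is solved instead of the full system. All problem sizes and the Gaussian sketch choice are assumed values for illustration.

```python
# Generic sketch-and-solve illustration for least squares: compress the
# rows of an overdetermined system with a random projection and solve the
# much smaller sketched problem.  This is a textbook-style example, not
# the core-set/SOS algorithms from the ICRA/NeurIPS papers.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical large problem: n observations, d parameters (n >> d).
n, d, k = 20_000, 10, 200           # k = sketch size (assumed)
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)

# Gaussian sketch: S has k rows, so S @ A is only k x d.
S = rng.normal(size=(k, n)) / np.sqrt(k)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# Compare against the exact solution of the full problem.
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
print("relative error:", np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full))
```

The sketched solution is computed from a system with k rows instead of n, which is the kind of compression that makes real-time computation on a weak on-board computer plausible.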
Demo videos of toy drones inside and outside the lab will also be presented.
Joint work with Ibrahim Jubran, Alaa Malouf, and Yair Marom.
Based on a paper in ICRA’19 and an Outstanding Award paper in NeurIPS’19.