the accumulated works of Brigit A. Schroeder


email: beejisbrigit-at-gmail.com [CV] [Project Portfolio] [YouTube]

I am currently a doctoral student at the University of California, Santa Cruz and a Visiting Scholar collaborating with Stanford University. At UC Santa Cruz, I am working with deep learning, computer vision, and robotics to develop assistive technology for persons with visual impairments. At Stanford, I am working on semantic navigation to improve the way robots traverse dynamic environments.

I was previously a doctoral student in both the Computer Vision and Machine Learning Group (Dr. Kate Saenko) and the Engaging Computing Group (Dr. Fred Martin) at the University of Massachusetts, Lowell. I am interested in applying machine learning to computer vision and robotics to build systems that perform intelligently and efficiently. The current focus of my research is multi-class object recognition and vision for mobile and robotic platforms (detection and tracking).

Below right is an unblended mosaic I created of the Notre-Dame d'Amiens cathedral in France. Computational photography is my form of artistic expression. ;-)

RESEARCH

Semantic Navigation for Social Robots [example on YouTube]

I was a Visiting Scholar at Stanford University, hosted by Dr. Silvio Savarese in the Computational Vision and Geometry Lab (CVGL), during summer 2016 and am actively collaborating with his group as part of the Jackrabbot project. For this project, I am working on semantic navigation to improve the way a robot traverses dynamic environments.
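
To give a flavor of the idea, here is a minimal Python sketch (illustrative only, not the Jackrabbot code; the class names and cost values are made up): semantic labels from perception become planner traversal costs, so the robot keeps a wider berth around people than around static obstacles.

import numpy as np

# Hypothetical semantic costs: walls are impassable, pedestrians are
# traversable in principle but heavily penalized; values are placeholders.
SEMANTIC_COST = {"floor": 0.0, "wall": np.inf, "pedestrian": 50.0}

def semantic_cost_map(labels, inflation_radius=3):
    """labels: 2D numpy array of semantic class names, one per grid cell.
    Returns a cost map where cost decays linearly away from each obstacle."""
    h, w = labels.shape
    cost = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            base = SEMANTIC_COST.get(labels[y, x], 0.0)
            if base == 0.0:
                continue
            # Inflate cost around the labeled cell so the planner prefers
            # paths that keep a comfortable distance from dynamic agents.
            for dy in range(-inflation_radius, inflation_radius + 1):
                for dx in range(-inflation_radius, inflation_radius + 1):
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < h and 0 <= xx < w):
                        continue
                    falloff = 1.0 - np.hypot(dy, dx) / inflation_radius
                    if falloff <= 0.0:
                        continue
                    cost[yy, xx] = max(cost[yy, xx], base * falloff)
    return cost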

Magic Leap: Relocalization with Deep Metric Learning

I was a Deep Learning Graduate Intern at Magic Leap in Mountain View, CA, in the Advanced Technologies Group, working with Andrew Rabinovich and Jean-Yves Bouguet. While at Magic Leap, I researched camera relocalization with deep metric learning as a way to improve SLAM tracking, and integrated my deep relocalization network on Magic Leap's 1st-generation AR/CR hardware.
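
The general technique, as a hedged PyTorch sketch (not Magic Leap's network; the function names and margin value are my own for illustration): learn an embedding with a triplet loss so that images taken from nearby camera poses land close together in feature space, then relocalize a query image by nearest-neighbor lookup against keyframes with known poses.

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """anchor/positive: embeddings of images from nearby poses;
    negative: embedding of an image from a distant pose."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    # Pull same-place pairs together, push different-place pairs apart.
    return F.relu(d_pos - d_neg + margin).mean()

def relocalize(query_emb, keyframe_embs, keyframe_poses):
    """Return the stored pose of the keyframe whose embedding is closest
    to the query embedding (query_emb: (D,), keyframe_embs: (N, D))."""
    dists = torch.cdist(query_emb.unsqueeze(0), keyframe_embs).squeeze(0)
    return keyframe_poses[torch.argmin(dists)]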

ECCV 2014: Deconstructing the Deformable Parts Model: Do More with Less - Parts and Attributes Workshop, European Conference on Computer Vision (ECCV) 2014, Zurich, Switzerland [pdf]

Multi-class Object Detection in Big Data - Advisor: Dr. Kate Saenko

I am evaluating the computational trade-offs of applying statistical techniques such as bagging to object detectors like the deformable parts model (DPM), the current state of the art for detection. I have also investigated how this can be applied to human attribute detection in 'big data' sets (e.g., the PASCAL Visual Object Classes Challenge dataset).
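
A generic sketch of the bagging idea in Python (an assumed setup, not my exact experimental code; each detector here is any callable returning boxes and scores): ensemble members are trained on bootstrap resamples, and their pooled detections are merged with non-maximum suppression (NMS).

import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; returns indices to keep."""
    order = np.argsort(scores)[::-1]          # highest-scoring boxes first
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou < iou_thresh]   # drop overlapping duplicates
    return keep

def bagged_detect(detectors, image):
    """Pool detections from detectors trained on bootstrap samples."""
    results = [d(image) for d in detectors]
    boxes = np.vstack([b for b, s in results])
    scores = np.concatenate([s for b, s in results])
    keep = nms(boxes, scores)
    return boxes[keep], scores[keep]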

2013 NASA RASC-AL EXPLORATION ROBO-OPS COMPETITION - 1ST PLACE TEAM: Graduate Computer Vision Lead - Advisor: Dr. Holly Yanco, UMass Lowell Robotics Lab

In June 2013, I was the Computer Vision Lead for the UMass Lowell entry in the 2013 NASA RASC-AL Robo-Ops Competition at NASA Johnson Space Center in Houston. I designed and implemented a persistent multi-camera rock detection system in ROS that ran live across four cameras on a Mars rover prototype. The video below is a prototype demonstrating the rock detection algorithm; a sketch of a single-camera detection pass follows the links below.

[BLOG: UMass Lowell Roverhawks] [Feature Article]
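
For illustration, a minimal single-camera detection pass of the kind that could feed such a system (a hypothetical OpenCV sketch, not the competition ROS code; the HSV range and area threshold are placeholder values): color segmentation plus contour filtering, repeated over each of the four camera streams.

import cv2
import numpy as np

def detect_rocks(frame_bgr, lower_hsv=(0, 80, 60), upper_hsv=(25, 255, 255),
                 min_area=200):
    """Return bounding boxes of rock-colored blobs in one camera frame.
    The HSV range here is a placeholder tuned for reddish rocks."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Morphological opening removes small speckle before contour extraction.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]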

3D SLAM (Simultaneous Localization and Mapping)

I was Co-PI and Technical Lead for a robotics and computer vision research project entitled ‘3D SLAM (Simultaneous Localization and Mapping)’, focused on 3D localization and feature tracking to provide a virtualized, photorealistic 3D view of the robot's environment. The research focus was improved situational awareness for UGVs in urban environments. (The MITRE Corp.)

I developed and implemented an optical flow-based visual odometry algorithm to perform multi-frame alignment within a stereo-based 2D and 3D feature tracking and localization system, as part of the 3D SLAM computer vision pipeline. A condensed sketch of the core step follows the paper link below.

[Related SPIE paper]
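
A condensed OpenCV sketch of the core step (illustrative, not the MITRE implementation; the feature and RANSAC parameters are placeholders): track sparse features frame-to-frame with Lucas-Kanade optical flow, then recover the relative camera motion from the tracked correspondences. Monocular pose is only recovered up to scale here; the stereo system resolved scale from depth.

import cv2
import numpy as np

def relative_pose(prev_gray, curr_gray, K):
    """K: 3x3 camera intrinsics. Returns rotation R and unit translation t
    between two consecutive grayscale frames."""
    # Detect corners in the previous frame and track them into the current one.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=8)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good0 = p0[status.ravel() == 1]
    good1 = p1[status.ravel() == 1]
    # Estimate the essential matrix robustly, then decompose it into R, t.
    E, inliers = cv2.findEssentialMat(good1, good0, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good1, good0, K, mask=inliers)
    return R, t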

Monitor Tracking with High Motion Blur for Eye-Tracking Cameras

I designed and implemented a fast monitor tracking system to be used with the Mobile Eye XG eye tracking system and ASL Results Plus gaze analysis software. This allowed real-time gaze coordinates to be projected into a virtual video representation of the user's desktop. The video has been slowed down to demonstrate the tracking, but the system itself runs at 20-30 fps. A sketch of the underlying gaze-to-screen mapping appears below.

This work was released as a new feature in the latest ASL Results Plus package.
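
The core mapping, as a hypothetical OpenCV sketch (not the shipped ASL feature; the screen resolution and corner ordering are assumptions): once the monitor's four corners are tracked in the scene camera, a homography projects raw gaze coordinates into desktop coordinates.

import cv2
import numpy as np

def gaze_to_screen(gaze_xy, monitor_corners, screen_w=1920, screen_h=1080):
    """monitor_corners: four (x, y) scene-camera points ordered top-left,
    top-right, bottom-right, bottom-left; gaze_xy: (x, y) in the same frame."""
    src = np.array(monitor_corners, dtype=np.float32)
    dst = np.array([[0, 0], [screen_w, 0],
                    [screen_w, screen_h], [0, screen_h]], dtype=np.float32)
    # Homography from the tracked monitor quadrilateral to screen space.
    H = cv2.getPerspectiveTransform(src, dst)
    pt = np.array([[gaze_xy]], dtype=np.float32)      # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]      # gaze in screen pixels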

3D Mantle Convection Below the Surface of the Earth

The following are high-resolution images of 3-D mantle convection, from work I did as an undergraduate intern at the Minnesota Supercomputer Institute in the Computational Geophysics Group. The basic premise: warm stuff rises (red) and cold stuff sinks (blue).


An overview of the mantle.
A convection plume.
Detail of convection around plume.
Another view of the mantle.
MPEG - rotating view of 3-D mantle convection.