Simulations, AI enable GRIDSMART cameras to time traffic lights to MPG estimates
Each year, approximately 6 billion gallons of fuel are wasted as vehicles idle at stop lights or sit in dense traffic, according to US Department of Energy estimates. The least efficient of these vehicles are the large, heavy trucks used for hauling goods, which burn far more fuel while idling than passenger cars do. Devising a way for such "gas-guzzlers" to make fewer stops in congested areas should therefore yield real fuel savings.
A first-year seed project funded by HPC4Mobility, the DOE Vehicle Technologies Office’s program for exploring energy efficiency increases in mobility systems, demonstrates how such a goal could be accomplished. Using the preexisting stop-light cameras of GRIDSMART, a Tennessee-based company that specializes in traffic-management services, researchers at DOE’s Oak Ridge National Laboratory have designed a computer vision system that can visually identify vehicles at intersections, determine their gas mileage estimates, and then direct traffic lights to keep less-efficient vehicles moving to reduce their fuel consumption.
In this data-centric age of artificial intelligence and machine learning, it may sound like a straightforward approach to a longstanding problem: let AI handle it. But proving such a system could work with current technology was a rather complicated puzzle that required fitting together a lot of different pieces: high-tech cameras, vehicle datasets, artificial neural networks, and computerized traffic simulations.
In fact, when R&D staff member Thomas Karnowski of ORNL’s Imaging, Signals, and Machine Learning Group first floated the idea, some of his colleagues were skeptical. Considering all the different variables that might affect fuel economy, could a mere vehicle image really provide enough data to program traffic lights for less waste?
“Sometimes you’ve got to reach a little bit to find solutions and figure out what’s possible,” Karnowski said.
Scientists may use cutting-edge technology and centuries of scientific research to tackle big questions, but they are also often guided by a basic human instinct: a hunch. In this case, Karnowski was sure he could find a way to teach cameras to identify vehicles' fuel economy and then send that information to a grid-wide traffic-control system. Karnowski and his multidisciplinary team at ORNL did just that, though this proof-of-concept experiment is only the first step toward a real-world implementation.
Eyes in the sky
To make such a camera-based control system work in the first place requires smart cameras placed at high-traffic intersections, able to capture images of vehicles and equipped to transmit the data. Fortunately, such camera systems do exist—including one produced by GRIDSMART, a company located just a few miles from the ORNL campus in East Tennessee.
GRIDSMART’s camera systems are installed in 1,200 cities globally, replacing traditional ground sensors with overhead fisheye cameras that provide horizon-to-horizon vision tracking for optimal traffic-light actuation. But that’s not all they do—the bell-shaped cameras connect to processor units running GRIDSMART client software that provides municipal traffic engineers with very detailed information, from traffic metrics to unobstructed views of accidents.
“In addition to detecting vehicles, bicycles, and pedestrians for intersection actuation, the GRIDSMART processor counts vehicles and bicycles moving underneath the camera,” said Tim Gee, principal computer vision engineer at GRIDSMART. “For each vehicle count, we determine a length-based classification and what type of turn the vehicle made as it went through the intersection.”
This data can be used to adjust intersection timings to improve the flow of traffic. The vehicle counts can also inform planning for construction or lane changes and help measure the effects of traffic-control changes.
GRIDSMART’s system sounded like the perfect testbed for Karnowski’s big idea, so he pitched it to the company. Gee and other engineers there liked what they heard. The project could open up new avenues of data usage for the company; instead of measuring only the time spent in an intersection, this proposed system would allow GRIDSMART cameras to actually make an impact on the environment.
“This isn’t something GRIDSMART would have had the resources to conduct on its own,” Gee said. “GRIDSMART is focused on developing and improving its traffic control and analysis systems, whereas ORNL provides a broad scientific and engineering background as well as world-class computing resources.”
The team’s first step in February 2018 was to use GRIDSMART cameras to create an image dataset of vehicle classes. With GRIDSMART cameras conveniently installed on the ORNL campus, the team also employed a ground-based roadside sensor system being developed at ORNL, allowing them to combine the overhead images with high-resolution ground-level views. Once vehicle-classification labels were applied using commercial software, and DOE fuel-economy estimates added, the team had a unique dataset to train a convolutional neural network for vehicle identification.
The resulting ORNL Overhead Vehicle Dataset showed that GRIDSMART cameras could indeed successfully capture useful vehicle data, gathering images of approximately 12,600 vehicles by the end of September 2018, with “ground truth” labels (makes, models, and MPG estimates) spanning 474 classifications. However, Karnowski determined that these classifications weren’t numerous enough to effectively train a deep learning network—and the team didn’t have sufficient time left in their year-long project to gather more. So, where to find a larger, fine-grained vehicle dataset?
Karnowski recalled a vehicle-image project by Stanford University researcher Timnit Gebru that identified 22 million cars from Google Street View images, classifying them into more than 2,600 categories (such as make and model) and then correlating them with demographic data. With Gebru’s permission, Karnowski downloaded the dataset, and the team was ready to create a neural network as the second step in the project.
Gebru had used the influential AlexNet convolutional neural network for her project, so the team decided to try adapting it, too.
“We got the same neural network and retrained it on her data and got very similar results to what she got—the difference is that we then used it to estimate fuel consumption by substituting vehicle types with their average fuel consumption, using DOE’s tables. That was a bit of an effort, too, but that’s what it’s all about,” Karnowski said.
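The substitution Karnowski describes can be sketched as a simple lookup from a classifier's predicted vehicle class to an average fuel-economy figure. The class names, MPG values, and function names below are illustrative placeholders, not the team's actual data or code, which drew on DOE's published tables.

```python
# Hypothetical sketch of the substitution step: map a classifier's
# predicted vehicle class to an average-MPG estimate, then derive a
# weight for traffic control. Values are illustrative, not DOE data.
MPG_BY_CLASS = {
    "sedan": 30.0,
    "suv": 22.0,
    "pickup": 18.0,
    "heavy_truck": 6.5,
}

def estimated_mpg(predicted_class: str, default: float = 24.0) -> float:
    """Return the average-MPG estimate for a predicted vehicle class."""
    return MPG_BY_CLASS.get(predicted_class, default)

def fuel_weight(predicted_class: str) -> float:
    """Control weight: lower MPG means higher priority for green time."""
    return 1.0 / estimated_mpg(predicted_class)
```

Under this scheme a heavy truck carries several times the weight of a sedan, which is what lets a downstream controller favor the direction with more fuel consumption.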
The team produced another neural network for comparison using the Multinode Evolutionary Neural Networks for Deep Learning (MENNDL), a high-performance computing software stack developed by ORNL’s Computational Data Analytics Group. A 2018 finalist for the Association for Computing Machinery’s Gordon Bell Prize and a 2018 R&D 100 Award winner, MENNDL uses an evolutionary algorithm that not only creates deep learning networks but also evolves network design on the fly. By automatically combining and testing millions of “parent” networks to produce higher-performing “children,” MENNDL breeds optimized neural networks.
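The parent-and-child breeding loop described above can be illustrated with a minimal evolutionary search over network hyperparameters. This is a toy sketch of the general technique, not MENNDL's actual code: the genome encoding, mutation rules, and stand-in fitness function are all invented for illustration, where MENNDL would instead train and score real deep networks on HPC resources.

```python
import random

# Toy sketch of an evolutionary neural-architecture search: genomes
# encode hyperparameters; each generation keeps the fittest "parents"
# and breeds mutated "children" from them.

def random_genome():
    return {"layers": random.randint(2, 8),
            "filters": random.choice([16, 32, 64])}

def mutate(genome):
    child = dict(genome)
    if random.random() < 0.5:
        child["layers"] = max(2, child["layers"] + random.choice([-1, 1]))
    else:
        child["filters"] = random.choice([16, 32, 64])
    return child

def evolve(fitness, population_size=8, generations=10):
    pop = [random_genome() for _ in range(population_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: population_size // 2]  # keep the fittest half
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

def toy_fitness(genome):
    # Stand-in for validation accuracy: pretend ~6 layers and 32
    # filters is the sweet spot for this imaginary task.
    return -abs(genome["layers"] - 6) - abs(genome["filters"] - 32) / 16
```

In MENNDL the fitness evaluation is the expensive part, which is why the search ran on a leadership-class machine rather than a workstation.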
Using Gebru’s training dataset, Karnowski’s team ran MENNDL on the now-decommissioned Cray XK7 Titan—once rated as the most powerful supercomputer in the world at 27 petaflops—at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at ORNL. Karnowski said that while MENNDL produced some novel architectures, their classification accuracy didn’t surpass that of the team’s AlexNet-derived network. With additional time and image data for training, Karnowski believes MENNDL could have produced a better-optimized network, but the team was nearing its deadline.
It was time to put the pieces of the proposed system together and see whether it could actually work.
Virtual urban mobility
Lacking an available city-wide grid of intersections equipped with GRIDSMART traffic lights, Karnowski’s team instead turned to computer simulations to test their system. Simulation of Urban MObility (SUMO) is an open-source simulation suite that enables researchers to model traffic systems, including vehicles, public transportation, and even pedestrians. SUMO allows for custom models, so Karnowski’s team was able to adapt it to their project. Adding a “visual sensor model” to the SUMO simulation environment, the team used reinforcement learning to guide a grid of traffic-light controllers to reduce wait times for larger vehicles.
“In a real GRIDSMART system, they just send vehicle data to a controller, and it says, ‘I’ve got cars waiting, so it’s time to change the light,’” Karnowski said. “In our proof-of-concept system, that information would then be fed to a controller that can look at multiple intersections and try to say, ‘We’ve got high-consumption vehicles coming in this direction, and lower-consumption vehicles in this other direction—let’s change the light timing so we favor the direction where there’s more fuel consumption.’”
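The control objective Karnowski describes can be sketched as a fuel-weighted reward for a reinforcement learner: vehicles waiting at a red light are penalized in proportion to their estimated fuel burn, so low-MPG trucks count for more than sedans. This is an illustrative sketch of the idea, assuming a per-vehicle wait time and MPG estimate are available; it is not the team's actual reward function or SUMO integration.

```python
# Illustrative fuel-weighted reward for a traffic-signal learner
# (not the team's implementation): queued vehicles are weighted by
# estimated fuel burn, so keeping low-MPG vehicles moving pays more.

def fuel_weighted_reward(queues):
    """queues: list of (wait_seconds, mpg_estimate) tuples for
    vehicles currently waiting. Returns a negative reward that a
    learner maximizes by reducing fuel-weighted waiting."""
    return -sum(wait / mpg for wait, mpg in queues)

# Same 30-second wait, very different penalty:
truck_queue = [(30.0, 6.5)]   # one heavy truck, ~6.5 MPG
sedan_queue = [(30.0, 30.0)]  # one sedan, ~30 MPG
```

A controller trained against such a reward will tend to give green time to the approach holding the trucks, which is exactly the behavior the simulated grid was trained toward.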
The method was tested under a variety of traffic scenarios designed to evaluate the potential fuel savings from visual sensing. In particular, some scenarios with heavy truck traffic suggested fuel savings of up to 25 percent with minimal impact on wait times. In other scenarios, the simulated system was trained on heavy truck traffic but evaluated under more balanced test conditions; those savings were not quantified, but the trained reinforcement-learning controller adapted readily to the new conditions.
All these test cases were limited to establish proof-of-concept, and more work is needed to accurately assess the impact of this approach. Karnowski hopes to continue developing the system with larger datasets, improved classifiers, and more expansive simulations.
GRIDSMART, meanwhile, considers the project’s results to foreshadow promising new services for their customers.
“This study gives us ideas for how our system could be used in the future for more than just reducing congestion. It could actually save energy and help the environment,” Gee said. “Currently there are no announced plans for a related product feature, but someday we may be able to enable this novel optimization in real time or use it to provide additional reporting. I think municipalities would be interested in such technologies to save fuel and improve air quality.”
Not every project conducted at a national lab results in a complete solution to a vexing issue—but by taking a swing at persistent problems, researchers can gather valuable knowledge along the way.
“We did show that you could use GRIDSMART cameras to estimate vehicle fuel consumption. We did show that you could use multiple GRIDSMART cameras to save energy using reinforcement learning. We made a useful dataset that we think could be used by other folks in the future. And we also did show that MENNDL could evolve topologies that could help estimate vehicle fuel consumption visually,” Karnowski said.
Work was funded by the Vehicle Technologies Office’s HPC4Mobility seed project program of the US Department of Energy’s Office of Energy Efficiency and Renewable Energy.
UT-Battelle LLC manages Oak Ridge National Laboratory for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.
Related Publication: T. Karnowski, R. Tokola, S. Oesch, M. Eicholtz, J. Price, and T. Gee, “Estimating Vehicle Fuel Economy from Overhead Camera Imagery and Application for Traffic Control.” Paper presented at IS&T International Symposium on Electronic Imaging Science and Technology, Burlingame, CA, January 26–30, 2020.