Oztech's MODE simulator is a powerful tool for testing autonomous systems. To use it, the simulator must be supplied with a mesh representing the world, so that the simulated sensors can present that world to the vehicle.
One option for supplying a world is to create a fictional or virtual world based on some real-world location. An example is our "TRC track world", which is modeled on the real TRC test track as it existed on a particular date, but simplified to a purely flat surface.
Another example is our "Oztech Test Area" world, which is inspired by a real mine but does not represent any specific site. It was developed to demonstrate the operation of our articulated truck in a previous project and is still used to demonstrate other vehicles and scenarios.
Some work has also been done on simulating urban settings. This was mostly done with SUMO, by loading a Road Network Definition File (RNDF) that describes the layout of a set of roads. With SUMO simulating the behavior of other vehicles and pedestrians, Gazebo can simulate the autonomous vehicle's sensors and motion. We will discuss this in more detail in a later article.
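To make the RNDF idea concrete, here is a minimal sketch of extracting waypoints from an RNDF-style file. It assumes the DARPA Urban Challenge convention where each waypoint line has the form `segment.lane.waypoint latitude longitude`; a full parser would also need to handle checkpoints, stop lines, exits, and zone perimeters, which this sketch ignores. The file contents below are made up for illustration.

```python
import re

# Waypoint lines in an RNDF look like "1.2.3 <lat> <lon>":
# a segment.lane.waypoint id followed by latitude and longitude.
WAYPOINT_RE = re.compile(r"^(\d+)\.(\d+)\.(\d+)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)$")

def parse_rndf_waypoints(text):
    """Extract (segment, lane, waypoint, lat, lon) tuples from RNDF text.

    A simplified sketch: lines that are not plain waypoints are skipped.
    """
    waypoints = []
    for line in text.splitlines():
        m = WAYPOINT_RE.match(line.strip())
        if m:
            seg, lane, wp = (int(m.group(i)) for i in range(1, 4))
            lat, lon = float(m.group(4)), float(m.group(5))
            waypoints.append((seg, lane, wp, lat, lon))
    return waypoints

# Illustrative RNDF fragment (not from a real road network).
sample = """RNDF_name sample_roads
num_segments 1
segment 1
num_lanes 1
lane 1.1
num_waypoints 2
1.1.1 34.587562 -117.366660
1.1.2 34.587600 -117.366720
end_lane
end_segment
end_file
"""

print(parse_rndf_waypoints(sample))
```

The waypoint list recovered this way is what a traffic simulator like SUMO ultimately needs: an ordered set of lane centerline points from which road geometry can be rebuilt.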
The other option for creating a world is to use depth sensors on a moving vehicle to scan the real world. Such scans can produce very accurate models of what the simulated vehicle's sensors would see, since they are built from real sensor views. The main limitation is coverage: if the simulated vehicle strays too far from the path the data collection vehicle drove, the returned sensor data may differ from what the vehicle would actually sense in the real world, simply because that part of the world was never scanned.
For simplicity's sake, it is also preferable that the data be collected with few dynamic objects present, or that any dynamic objects be labeled in the dataset so they can be removed, leaving only the static environment.
As an example, the world mesh in Figure 4 was generated from the first scene of the PandaSet open dataset. On the left, every point in the point cloud was used to generate the mesh; on the right, atmospheric effects, vehicles, and pedestrians were removed using the labels supplied with the dataset. The resulting mesh on the right is much cleaner and more detailed, free of the noise introduced by dynamic objects.
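The label-based filtering step can be sketched as follows. The class names here are hypothetical placeholders, not PandaSet's actual label schema, and the points are toy values; the idea is simply to drop every point whose semantic label belongs to a dynamic or atmospheric class before meshing.

```python
# Hypothetical dynamic classes; a real dataset's label names will differ.
DYNAMIC_LABELS = {"Car", "Pedestrian", "Rain"}

def filter_static_points(points, labels, dynamic_labels=DYNAMIC_LABELS):
    """Keep only points whose per-point semantic label is a static class."""
    return [p for p, lab in zip(points, labels) if lab not in dynamic_labels]

# Toy point cloud: (x, y, z) coordinates with one label per point.
points = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.5), (3.0, 1.0, 0.2)]
labels = ["Road", "Car", "Vegetation"]

print(filter_static_points(points, labels))
```

Only the static points survive, so a surface reconstruction run afterwards sees the bare environment rather than transient traffic.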