Query in nuScenes Dataset: Visualize LIDAR Point Clouds for a Single Instance
Published: 2020-06-08 10:11
Tagged: pointclouds, nuscenes, autonomous-driving
### 1. Introduction

------

In the previous post ([Introduction to nuScenes Dataset](http://sayef.tech/post/introduction-to-nuscenes-dataset-for-autonomous-driving/)) of this series, we got to know what the nuScenes dataset is and what we can do with it. Today we will create a small animation in which we extract and visualize the LIDAR point clouds of a single track.

### 2. Let's Code

------

- Before doing anything real, we have to import the necessary libraries and load the dataset. Let's do that first.

```
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import rcParams
from matplotlib.axes import Axes
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
from matplotlib import rc, animation
from IPython.display import HTML
%matplotlib inline

from nuscenes.nuscenes import NuScenes, NuScenesExplorer
from nuscenes.utils.data_classes import LidarPointCloud
from nuscenes.utils.geometry_utils import BoxVisibility

nusc = NuScenes(version='v1.0-mini', dataroot='data/nuscenes/', verbose=True)

# get_color() is an instance method of NuScenesExplorer, so we create
# an explorer instance here; we will use it later to color the boxes.
nusc_explorer = NuScenesExplorer(nusc)
```

#### Output:

```
======
Loading NuScenes tables for version v1.0-mini...
23 category,
8 attribute,
4 visibility,
911 instance,
12 sensor,
120 calibrated_sensor,
31206 ego_pose,
8 log,
10 scene,
404 sample,
31206 sample_data,
18538 sample_annotation,
4 map,
Done loading in 1.2 seconds.
======
Reverse indexing ...
Done reverse indexing in 0.1 seconds.
======
```

- There are 23 different categories of objects. Let's see what those are:

```
for category in nusc.category:
    print(category['name'])
```

#### Output:

```
human.pedestrian.adult
human.pedestrian.child
human.pedestrian.wheelchair
human.pedestrian.stroller
human.pedestrian.personal_mobility
human.pedestrian.police_officer
human.pedestrian.construction_worker
animal
vehicle.car
vehicle.motorcycle
vehicle.bicycle
vehicle.bus.bendy
vehicle.bus.rigid
vehicle.truck
vehicle.construction
vehicle.emergency.ambulance
vehicle.emergency.police
vehicle.trailer
movable_object.barrier
movable_object.trafficcone
movable_object.pushable_pullable
movable_object.debris
static_object.bicycle_rack
```

- Let's pick `vehicle.car` as our potential target category. We can list all the instances of this category first. The following piece of code collects the indices of the instances that belong to the `vehicle.car` category.

```
vehicles = []
for i in range(len(nusc.instance)):
    instance = nusc.instance[i]
    category = nusc.get('category', instance['category_token'])
    if 'vehicle.car' in category['name']:
        vehicles.append(i)
```

- In the nuScenes dataset, there are a total of 12 sensors. We can have a look at the sensor list using the following code snippet.

```
for sensor in nusc.sensor:
    print(sensor['channel'])
```

#### Output:

```
CAM_FRONT
CAM_BACK
CAM_BACK_LEFT
CAM_FRONT_LEFT
CAM_FRONT_RIGHT
CAM_BACK_RIGHT
LIDAR_TOP
RADAR_FRONT
RADAR_FRONT_RIGHT
RADAR_FRONT_LEFT
RADAR_BACK_LEFT
RADAR_BACK_RIGHT
```
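Before we filter instances by camera visibility, it may help to see how the tables we just queried link together. Below is a minimal sketch of my own (not from the original post; the variable names `my_scene`, `first_sample`, and `lidar_sd` are illustrative) that walks from a scene to the LIDAR file backing its first sample:

```
# Walk the table links: scene -> sample -> sample_data.
# Illustrative sketch; variable names are not from the original post.
my_scene = nusc.scene[0]
first_sample = nusc.get('sample', my_scene['first_sample_token'])

# sample['data'] maps each sensor channel to a sample_data token.
lidar_sd = nusc.get('sample_data', first_sample['data']['LIDAR_TOP'])
print(lidar_sd['filename'])  # relative path of the .pcd.bin file under dataroot
```

The same `sample_record['data'][<CHANNEL>]` lookup is what the filtering code below relies on.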
- For the sake of easy visualization, let's consider only those instances that can be seen from the front camera.

```
def belongs_to(anntoken, expected_cam='CAM_FRONT'):
    ann_record = nusc.get('sample_annotation', anntoken)
    sample_record = nusc.get('sample', ann_record['sample_token'])
    _, boxes, _ = nusc.get_sample_data(sample_record['data'][expected_cam],
                                       box_vis_level=BoxVisibility.ANY,
                                       selected_anntokens=[anntoken])
    if len(boxes) == 1:
        return True
    return False


cam_front_vehicles = []
for i in vehicles:
    instance = nusc.instance[i]
    ann_token = instance['first_annotation_token']
    if belongs_to(ann_token):
        cam_front_vehicles.append(i)
```

- We have missed one important thing: the previous code snippet only checks whether an instance is visible from the front camera in the first frame of its track. To ensure that the instances are seen from the front camera the whole time, we need to apply the following additional filter.

```
print('Instance IDs: ')
for i in cam_front_vehicles:
    instance = nusc.instance[i]
    first_token = instance['first_annotation_token']
    last_token = instance['last_annotation_token']
    current_token = first_token
    flag = True
    # walk the annotation chain up to (but not including) the last token
    while current_token != last_token:
        if not belongs_to(current_token):
            flag = False
            break
        current_ann = nusc.get('sample_annotation', current_token)
        current_token = current_ann['next']
    if flag:
        print(i, end=' ')
```

#### Output:

```
Instance IDs: 
74 153 154 234 250 256 258 262 265 270 272 284 291 295 296 306 338 344 392 400 500 508 552 591 595 624 679 687 754 758 762 766 775 822 857 892 902
```

We loop through the tokens from the first annotation token to the last annotation token to retrieve the in-between annotation tokens and check whether each of them was captured from the front camera.

- Let's write some utility code which will be helpful for the later parts. First, we define a function `fig2np` which converts a *matplotlib* figure to a numpy array.

```
def fig2np(fig):
    ''' Converts a matplotlib figure to a numpy array '''
    canvas = FigureCanvas(fig)
    width, height = fig.get_size_inches() * fig.get_dpi()
    canvas.draw()
    image = np.frombuffer(canvas.tostring_rgb(), dtype='uint8').reshape(int(height), int(width), 3)
    return image
```
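As a quick sanity check of `fig2np` (my own addition, not part of the original post), we can convert a small throwaway figure and confirm that the array shape matches the figure size in pixels:

```
# Sanity check: a 2x2-inch figure at 100 dpi should become a
# 200x200 RGB array. Illustrative only, not from the original post.
test_fig, test_ax = plt.subplots(figsize=(2, 2), dpi=100)
test_ax.plot([0, 1], [0, 1])

arr = fig2np(test_fig)
print(arr.shape)  # expected: (200, 200, 3)
plt.close(test_fig)
```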
- We will now write code for rendering the 3D LIDAR data onto the 2D plane using some built-in functions of nuScenes, and return a numpy array of the rendered matplotlib figure. The code is mostly self-explanatory, and I have added the necessary comments.

```
def lidar2d(ann_token):
    '''
    Draws the bounding box of the instance over the 2D-rendered LIDAR data
    and returns a numpy array of the rendered figure.
    '''
    # Retrieve the annotation record associated with this annotation token
    ann_record = nusc.get('sample_annotation', ann_token)

    # Retrieve the sample record associated with the sample token of ann_token
    sample_record = nusc.get('sample', ann_record['sample_token'])

    # Get the LIDAR metadata from sample_record['data']['LIDAR_TOP'] and
    # retrieve the binary file path of the LIDAR data using nusc.get_sample_data().
    # We can also pass annotation tokens to visualize instances;
    # in our case, we have only one instance to visualize.
    lidar = sample_record['data']['LIDAR_TOP']
    data_path, boxes, _ = nusc.get_sample_data(lidar, selected_anntokens=[ann_token])

    # Declare a matplotlib figure and axes for visualization
    fig, ax = plt.subplots(1, 1, figsize=(9, 9))

    # We pass data_path to load the 3D LIDAR data and project it onto the
    # 2D plane. render_height, a built-in function of LidarPointCloud,
    # does the job for us; we just need to pass the matplotlib axis.
    LidarPointCloud.from_file(data_path).render_height(ax)

    # Let's draw the instance boxes over the rendered LIDAR data
    for box in boxes:
        c = np.array(nusc_explorer.get_color(box.name)) / 255.0
        box.render(ax, colors=(c, c, c))

    # prevent drawing axes
    plt.axis('off')

    # stop drawing
    %matplotlib agg

    # convert the matplotlib figure to a numpy array
    img = fig2np(fig)
    return img
```

- We can now choose one of our desired instances for visualization. Let's take instance index 153. We store all the images (as numpy arrays) in a list `imgs`.

```
instance = nusc.instance[153]
first_token = instance['first_annotation_token']
last_token = instance['last_annotation_token']
current_token = first_token

imgs = []
# walk the annotation chain and render each frame
while current_token != last_token:
    current_ann = nusc.get('sample_annotation', current_token)
    imgs.append(lidar2d(current_ann['token']))
    current_token = current_ann['next']
```

- Now that we have all our rendered images of instance index 153, we can animate the frames using the following code snippet.

```
def init():
    img.set_data(imgs[0])
    return (img,)

def animate(i):
    img.set_data(imgs[i])
    return (img,)

fig = plt.figure(figsize=(9, 9))
ax = fig.gca()
img = ax.imshow(imgs[0])

anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=len(imgs), interval=500, blit=True)
%matplotlib agg
HTML(anim.to_html5_video())
```

That's it! That's our desired output for today.
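If you would rather keep the animation as a file instead of displaying it inline, `FuncAnimation.save` can write it to disk. A minimal sketch, assuming `ffmpeg` is installed (the output filename is my own choice):

```
# Save the animation to disk; assumes ffmpeg is available.
# fps=2 matches the 500 ms frame interval used above.
anim.save('instance_153_lidar.mp4', writer='ffmpeg', fps=2)
```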
### 3. Conclusion

------

We learned how to use instances and categories. We are now able to render a 2D projection of 3D LIDAR data. We also learned how to render annotations, which are originally annotated as 3D cuboids, as 2D boxes. In the next post, we will clear the clutter of the LIDAR data. Happy learning!