1 Introduction
In order to plan a safe maneuver, a self-driving vehicle must predict the future motion of surrounding vehicles and pedestrians. Motion prediction is challenging in realistic city environments. In Figure 1, we illustrate several challenges for pedestrian prediction. Gaussian distributions often poorly fit state posteriors (Fig. 1a). Further, pedestrians have inherently multimodal behavior, as they can move in arbitrary directions and have unknown and changing goals, each achievable with multiple trajectories (Fig. 1b). Even with strong evidence for a particular action, such as a road crossing, partially observed environments increase uncertainty in the timing of the action (Fig. 1c). However, a self-driving vehicle motion planner needs actor predictions to be associated with time. Additional challenges include efficiently integrating spatial and temporal information, the mixed continuous-discrete nature of trajectories and maps, and the availability of realistic data.

In the context of self-driving, most prior work represents behaviors through trajectories. Future trajectories can be predicted with a recurrent neural network (RNN)
[alahi2016social, becker2018, gupta2018], a convolutional neural network (CNN) [luo2018fast, casas2018, bansal2018chauffeur], or with constant velocity, constant acceleration, or expert-designed heuristics. However, a trajectory that minimizes the mean-squared error with respect to the true path can only capture the conditional average of the posterior [bishop1994mixture]. The conditional average trajectory does not represent all possible future behaviors and may even be infeasible, lying between feasible trajectories.

To express multiple possible behaviors, a fixed number of future trajectories can be predicted [cui2018multimodal], or several can be sampled [gupta2018, sadeghian2019sophie]. Still, in realistic environments, posterior predictive distributions are complex and a large number of samples is needed to capture the space of possibilities. Such models trade off prediction completeness against latency from repeated sampling. Further, the number of possible trajectories increases exponentially over long time horizons, and uncertainty grows rapidly.
Instead of predicting trajectories, in this work we take a probabilistic approach, predicting distributions over pedestrian state at each timestep that can be used directly for cost-based self-driving vehicle planning. Conditioning on a spatiotemporal rasterization of agent histories aligned to the local map, we leverage deep convolutional neural network architectures for implicit multi-agent reasoning, and mimic human dynamics through a discrete residual flow network, which we refer to as DRFNet. We summarize our contributions as follows:


We develop a deep probabilistic formulation of actor motion prediction that provides marginal distributions over state at each future timestep without expensive marginalization or sampling. Our discrete residual flow equation is motivated by autoregressive generative models and better captures temporal dependencies than time-independent baselines.

We propose the convolutional Discrete Residual Flow Network that predicts actor state over long time horizons with highly expressive discretized distributions.

We thoroughly benchmark model variants and demonstrate the benefit of belief discretization on a large-scale, real-world dataset. We evaluate the likelihood, displacement error, multimodality, entropy, semantic mass ratio and calibration of the predictions, using a novel ModePool operator for estimating the number of modes of a discrete distribution.
2 Related work
Prior work on pedestrian prediction has largely modeled trajectories, goals, or high-level intent.
Human trajectory forecasting The pedestrian prediction literature is reviewed in [ridel2018review, Rudenko2019HumanMT]. Multi-pedestrian interactions have been modeled via pooling [alahi2016social, gupta2018] or game theory [ma2017forecasting]. becker2018 predict future trajectories with a recurrent encoder and MLP decoder, reporting lower error than more elaborate multi-agent schemes, and find that behaviors are multimodal and strongly influenced by the scene. Social GAN [gupta2018] is a sequence-to-sequence generative model whose trajectory samples vary in speed and turning angle, trained with a variety loss to encourage diversity. However, the runtime of the sampling approach scales with the number of samples ( ms for trajectories), even without using a local map, and many samples are needed. SoPhie [sadeghian2019sophie] is another sampling strategy, integrating external overhead camera imagery. In contrast, we predict entire expressive spatial distributions rather than individual samples and incorporate a local map into prediction.

Goal-directed prediction ziebart2009planning use historical paths to precompute a prior distribution over pedestrian goals indoors, then develop an MDP to infer a posterior distribution over future trajectories. wu2018probabilistic use a heuristic to identify possible goal locations in a mapped environment and a Markov chain to predict the next-time occupancy grid. Rehder et al. [rehder2018pedestrian, rehder2015goal] use a two-stage deep model to predict a Gaussian mixture over goals, then construct distributions at intermediate timesteps with a planning network. Still, the number of mixture components must be tuned, and the mixture is discretized during inference, which is computationally expensive. Fisac and Bajcsy [fisac2018, bajcsy2018] specify known goals for each human indoors, then estimate unimodal state distributions by assuming humans approximately maximize utility, i.e., progress toward the goal measured by Euclidean norm. They estimate prediction confidence from model performance and return uninformative distributions at low confidence. Confidence estimation is complementary to our approach.

Semantic map Pedestrian predictors have separately reasoned about spatially continuous trajectories and discretized world representations [ziebart2009planning, rehder2018pedestrian]. These works either ignore the semantic map or integrate it at an intermediate stage. In vehicle prediction, input map rasterizations are more widely used. IntentNet [casas2018] renders a bird's-eye view of the world to predict vehicle trajectories and high-level intention simultaneously, using a rasterized lane graph and a 2D convolutional architecture to improve over previous work [luo2018fast]. Similar map rasterizations are used in [bansal2018chauffeur, yang2018hdnet, djuric2018motion], and in this work.
Related modeling techniques
The convolutional long shortterm memory (ConvLSTM) architecture has been applied to spatiotemporal weather forecasting
[xingjian2015convolutional]. A ConvLSTM iteratively updates a hidden feature map, from which outputs are derived. In contrast, DRFNet sequentially adapts the output space rather than a hidden state. Similarly, the adaptive instance normalization operator [huang2017arbitrary] uses a shared feature to predict and apply scale/shift parameters to a fixed, discrete image. Normalizing flows [rezende2015variational] apply a series of invertible mappings to samples from a simple prior, e.g. a Gaussian, constructing a random variable with a complex PDF. While normalizing flows transform individual samples, we directly transform a probability mass function (PMF) for computational efficiency.
3 Discrete Residual Flow Network
In this paper, we express beliefs over future pedestrian positions through categorical distributions that discretize space. Such distributions can be used for cost-based planning or constrained path optimization in self-driving vehicles. In this section, we explain how we represent historical observations as a multi-channel image encoding both the known map and detected actors, a process we call rasterization. We then introduce a backbone deep neural network that extracts features from the rasterized image, followed by the probabilistic framework for our DRFNet. Finally, we introduce our DRF head, which uses the extracted features for prediction.
Encoding Historical Information Future pedestrian actions are highly correlated with historical actions. However, actions are also influenced by factors such as road surface types, traffic signals, static objects, vehicles, and other pedestrians. We rasterize all semantic map information and agent observations into a 3D tensor, encoding both spatial and temporal information by automatic rendering. The first two dimensions correspond to the spatial domain and the third dimension forms channels. Each channel is a px image encoding specific local bird's-eye-view (BEV) information at a resolution of px per meter. Figure 3 shows an example rasterization from a real urban scene.

Dynamic agents are detected from LiDAR and camera with the object detector proposed in liang2018deep, and are associated over time using a matching algorithm. The resulting trajectories are refined using an Unscented Kalman Filter [wan2000unscented]. DRFNet renders detected pedestrians at each timestep over the past seconds in per-timestep channels, and detected non-pedestrians (e.g., vehicles) in analogous channels. To discriminate the pedestrian of interest (PoI) from other actors, a grayscale image masks its tracklet.

DRFNet renders the local map in a similar fashion to [casas2018], though it centers the map about the PoI. Semantic map channels finely differentiate urban surface labels. These channels mask crosswalks, drivable surfaces, traffic light state, signage, and detailed lane information. Maps are annotated in a semi-automated fashion in cities where the self-driving vehicle may operate, and only polygons and polylines are stored. The final rasterization $x$ is the concatenation of these channels along the channel dimension, where the subscript denotes a collection of elements from all past timesteps up to the last observed timestep. All channels are rotated such that the currently observed PoI is oriented toward the top of the scene.
Backbone Network DRFNet uses a deep residual network with convolutional layers (ResNet18) [he2016deep] to extract features from the rasterization $x$. We extract feature maps at , , and of the input resolution from ResNet18. These multi-scale intermediate features are upscaled and aggregated into a global context feature with a feature pyramid network (FPN) [lin2017feature].
Probabilistic Actor State Prediction We now introduce a probabilistic formulation of future actor state prediction. Given rasterization $x$, we are interested in inferring a predictive posterior distribution over possible spatial locations of the PoI at each future timestep $t$. Instead of treating the state as a continuous random variable, we discretize space to permit a one-hot state encoding. Specifically, we divide space into a grid with a finite number of bins. The state at time $t$, denoted $s_t$, is a discrete random variable that takes one of the possible bins.

Consider the joint probability of the states over the $T$ future timesteps, i.e., $p(s_1, \dots, s_T \mid x)$. This distribution can be modeled with several factorizations. The first and most straightforward factorization assumes conditional independence of future timesteps,
$p(s_1, \dots, s_T \mid x) = \prod_{t=1}^{T} p(s_t \mid x)$   (1)
We can use a neural network, e.g., a CNN, to directly model each factor $p(s_t \mid x)$. In Section 4.3, we show the performance of a mixture density network and a fully-convolutional predictor that simultaneously predict these factors. Still, conditional independence is a strong assumption. The second factorization follows an autoregressive fashion, providing the foundation for many models in the literature,
$p(s_1, \dots, s_T \mid x) = \prod_{t=1}^{T} p(s_t \mid s_{1:t-1}, x)$   (2)
For example, recurrent encoder-decoder architectures [alahi2016social, becker2018, gupta2018] sample trajectories one state at a time and capture the conditional dependencies through a hidden state.
In contrast to the sample-based approach, we often desire access to a compact representation of $p(s_t \mid x)$ for a particular $t$, such as an analytic form or a discrete categorical distribution. As we always condition on $x$, we refer to $p(s_t \mid x)$ as a marginal distribution. Access to the marginal provides interpretability, parallel sampling and ease of planning, as the marginals can be used as occupancy grids. However, direct marginalization is expensive if not intractable, as we typically have no simple analytic form of the joint distribution. Approximation is possible with Monte Carlo methods, though many samples are needed to characterize the marginal.
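The sampling cost described above can be made concrete with a small sketch: estimating the marginal of an autoregressive model at one timestep by drawing full trajectories and histogramming the visited states. The function and its arguments are illustrative, not part of the paper's method.

```python
import numpy as np

def monte_carlo_marginal(sample_fn, n_samples, n_states, t):
    """Estimate p(s_t | x) from full-trajectory samples.

    `sample_fn` draws one trajectory as a sequence of state indices. Many
    samples are needed for a faithful marginal, which is the cost that
    direct marginal prediction avoids.
    """
    counts = np.zeros(n_states)
    for _ in range(n_samples):
        counts[sample_fn()[t]] += 1
    return counts / n_samples
```

Each additional timestep and mode of the posterior requires more samples, whereas a predicted marginal is available in one forward pass.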
Instead, we propose a flow between marginal distributions that resembles an autoregressive model in its iterative nature, but avoids sampling at each step. In contrast to a normalizing flow [rezende2015variational], which approximates a posterior over a single random variable by iteratively transforming its distribution, discrete residual flow transforms between the marginal distributions of different, temporally correlated random variables by exploiting a shared domain.

Discrete Residual Flow Our model recursively constructs $p(s_t \mid x)$ from $p(s_{t-1} \mid x)$,
$\log p(s_t \mid x) = \log p(s_{t-1} \mid x) + f_{\theta_t}\!\big(p(s_{t-1} \mid x), x\big) - \log Z_t$   (3)
where we refer to the second term on the right-hand side as the residual. $f_{\theta_t}$ is a subnetwork with parameters $\theta_t$, called the residual predictor, that takes the marginal distribution and context as input, and predicts an elementwise update used to construct the subsequent marginal distribution $p(s_t \mid x)$. $Z_t$ is the normalization constant ensuring $p(s_t \mid x)$ is a valid distribution. Note that the residual itself is not necessarily a valid probability distribution.
Eq. (3) can be viewed as a discrete probability flow that maps from the distribution of $s_{t-1}$ to that of $s_t$. We use deep neural networks to instantiate the probability distributions under this framework and provide a derivation of Eq. (3) in the appendix, Section 6.5.
For initialization, $p(s_0 \mid x)$ is constructed with high value around the observed PoI position and near-zero value over other states. In implementation, the residual predictor is a convolutional architecture that outputs a 2D image, a compact and convenient representation as our states are spatial. This 2D image is queryable at a state via indexing, as is the updated marginal. Additionally, in implementation, we normalize all marginals at once and apply residuals to the unnormalized potential $\phi_t$,
$\phi_t = \phi_{t-1} + f_{\theta_t}\!\big(\operatorname{softmax}(\phi_{t-1}), x\big), \qquad p(s_t \mid x) = \operatorname{softmax}(\phi_t)$   (4)
Figure 2 illustrates the overall computation process. The embedding of the rasterization is shared at all timesteps, used by each residual predictor. Figure 4 further illustrates the architectural details of the DRF residual predictor for one timestep.
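The recursion of Eq. (4) can be sketched in a few lines of NumPy. Arbitrary callables stand in for the learned residual predictors $f_{\theta_t}$; the only structural points illustrated are the additive update of the unnormalized potential and the single normalization per step.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def drf_rollout(init_logits, residual_fns, context):
    """Sketch of discrete residual flow in log space.

    Each `residual_fns[t]` maps (previous marginal, context) to an
    elementwise update over the same grid of discrete states.
    """
    phi = init_logits.copy()                       # unnormalized potential
    marginals = []
    for f in residual_fns:
        phi = phi + f(softmax(phi), context)       # residual update
        marginals.append(softmax(phi))             # normalize once per step
    return marginals
```

Because each step only transforms a potential over a fixed grid, all marginals are produced without sampling intermediate states.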
Learning We perform learning by minimizing the negative log likelihood (NLL) of the observed sequences of pedestrian movement. Specifically, we solve the following optimization,
$\min_{\theta,\, \theta_{1:T}} \; \mathbb{E}\Big[ -\sum_{t=1}^{T} \log p(s_t = s_t^{*} \mid x) \Big]$   (5)
where the expectation is taken over all possible sequences and is approximated via minibatches, $s_t^{*}$ denotes the observed PoI position at time $t$, and $\theta$ denotes the parameters of the backbone network.
4 Evaluation
There is no standard dataset for probabilistic pedestrian prediction with real-world maps and dynamic objects. Thus, we construct a large-scale dataset of real-world recordings, object annotations, and online detection-based tracks. We implement baseline pedestrian prediction networks inspired by prior literature [saleh2018long, zyner2018naturalistic, rehder2018pedestrian] and compare DRFNet against these baselines on standard negative log likelihood and displacement error measures. We propose an evaluation metric for measuring prediction multimodality, one of the most characteristic properties of pedestrian behavior. We also analyze the calibration, entropy and semantic interpretation of predictions. Finally, we present qualitative results in complex urban scenarios.
4.1 Dataset
Our dataset consists of 481,927 ground-truth pedestrian trajectories gathered in several North American cities. The dataset is split into 375,700 trajectories for training, 34,571 for validation, and 71,656 held-out trajectories for testing. Dynamic objects are manually annotated in a m range view from an on-vehicle LiDAR sensor. Annotations contain s ( frames) of past observations and s ( frames) of the future. These Hz, s sliding windows are extracted from longer logs.
We also fine-tune and evaluate DRFNet with variable-length trajectories from an object detector in the same scenarios. The detector is discussed in Section 3. This assesses real-world, on-vehicle prediction performance, reflecting the challenges inherent to real perception such as partial observability, occlusion and identity switches in tracking algorithms. While PoIs are annotated for a full seconds in our ground-truth experiments, realistic tracks are of variable length. A self-driving vehicle must predict the behavior of other agents with a very limited set of observations. Thus, we evaluate DRFNet by predicting seconds ( frames) into the future, given tracks with as few as historical frames, sufficient for estimating acceleration. Relaxing the requirements on past history avoids skewing our dataset toward easily tracked pedestrians, such as stationary agents.
4.2 Baselines
In this section, we describe two baseline predictor families. These baselines are trained end-to-end to predict distributions given features produced by the same backbone as our proposed model.
Model       | NLL Mean | NLL @1s | NLL @3s | NLL @10s | ADE 0.2–10s (m) | FDE @1s (m) | FDE @3s (m) | FDE @10s (m) | Mass Acc. (%) | Mass Recall (%)
------------|----------|---------|---------|----------|-----------------|-------------|-------------|--------------|---------------|----------------
Density Net | 5.39     | 2.87    | 3.96    | 6.74     | 3.49            | 0.93        | 1.72        | 7.66         | 77.99         | 81.33
MDN4        | 3.01     | 1.64    | 2.00    | 4.33     | 1.47            | 0.38        | 0.69        | 3.38         | 87.85         | 84.12
MDN8        | 3.43     | 1.60    | 2.77    | 4.79     | 1.78            | 0.60        | 0.88        | 3.91         | 85.56         | 84.19
ConvLSTM    | 2.51     | 0.89    | 1.86    | 4.07     | 1.58            | 0.47        | 1.06        | 3.20         | 88.02         | 85.02
DRFNet      | 2.37     | 0.76    | 1.74    | 3.83     | 1.23            | 0.35        | 0.62        | 2.71         | 89.78         | 85.41
Mixture Density Networks (MDNs) represent a conditional posterior over continuous targets given continuous inputs with a fully-connected neural network that predicts the parameters of a Gaussian mixture model [bishop1994mixture]. As a baseline, we implement a variant of this architecture that models pedestrian posteriors at multiple time horizons, conditioned on the past history and current location. Inspired by rehder2018pedestrian, we generate the $i$-th mixture component from neuron outputs $o_{i,1}, \dots, o_{i,5}$, which are reparameterized to obtain the mean $\mu_i$, covariance matrix $\Sigma_i$ and the responsibility $\pi_i$ of the mixture:

$\mu_i = (o_{i,1}, o_{i,2}), \quad \sigma_i = \sigma_{\min} + \exp\big((o_{i,3}, o_{i,4})\big), \quad \Sigma_i = \operatorname{diag}(\sigma_i^2), \quad \pi_i = \operatorname{softmax}_i(o_{i,5})$   (6)
Training MDNs is challenging due to a high sensitivity to initialization and parameterization. To avoid numerical instabilities, we enforce a minimum standard deviation. Even with careful initialization and parameterization, training can be unstable, which we mitigate by discarding abnormally large losses. Note that rehder2018pedestrian stabilized training by minimizing only the minimum of the batch-wise negative log likelihood. Minimizing this minimum loss leads to good performance on easy examples, but catastrophic performance on hard ones. Lastly, conversions from a discretized spatial input to a continuous output can be challenging to learn [liu2018intriguing], a problem that our proposed DRFNet avoids via a discretized output that is spatially aligned with the input.

ConvLSTM In contrast to our DRFNet, which recursively updates output distributions in log-probability space, one can also recurrently update a hidden state using a Convolutional LSTM [xingjian2015convolutional] that observes the previous prediction. Output distributions are then predicted from the hidden state.
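To make the MDN baseline's training objective concrete, the following is a minimal sketch of the negative log likelihood of a 2-D target under an axis-aligned Gaussian mixture, including the minimum-standard-deviation clamp mentioned above. The function name and the default clamp value are illustrative assumptions.

```python
import numpy as np

def mdn_nll(target, means, sigmas, weights, min_sigma=0.01):
    """NLL of a 2-D target under an axis-aligned Gaussian mixture.

    `means`, `sigmas`: shape (K, 2); `weights`: shape (K,), summing to 1.
    `min_sigma` mirrors the stabilization trick described in the text.
    """
    sigmas = np.maximum(sigmas, min_sigma)
    # per-component log density of an independent 2-D Gaussian
    diff = (target[None, :] - means) / sigmas
    log_comp = (-0.5 * (diff ** 2).sum(axis=1)
                - np.log(2.0 * np.pi * sigmas[:, 0] * sigmas[:, 1]))
    # log-sum-exp over components, weighted by responsibilities
    log_mix = np.log(weights) + log_comp
    m = log_mix.max()
    return -(m + np.log(np.exp(log_mix - m).sum()))
```

The log-sum-exp form keeps the mixture likelihood numerically stable even when individual components have very small density.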
4.3 Results
We evaluate negative log likelihood (NLL) at short and long prediction horizons, where lower values indicate more accurate predictions, as well as the mean NLL across all 50 future timesteps. In Table 1 and 2, we present results on the heldout test set for ground truth annotated logs and tracked, realworld detections, respectively. Our proposed DRFNet achieves a superior likelihood over the baselines by introducing a discrete state representation and a probability flow between timesteps.
Likelihood on ground truth tracks In order to evaluate our results under perfect perception, we benchmark on ground-truth (annotated) pedestrian trajectories. Table 1 shows that our proposed model reduces the mean NLL by when compared to the best performer among the MDNs and by with respect to the ConvLSTM baseline. This corresponds to a increase in geometric mean likelihood compared to the best MDN and a increase when compared to the ConvLSTM.

Likelihood on online tracks Under online, imperfect perception, DRFNet achieves a reduction of in mean NLL over the best MDN and over the ConvLSTM, i.e., a and a increase in the geometric mean likelihood of the future observed pedestrian positions, respectively (Table 2). DRFNet's sequential residual updates may regularize and smooth predictions despite perception noise. Adding more than components to the density networks does not reduce NLL. Directly predicting occupancy probability over a grid delivers stronger performance than discretizing a continuous spatial density. Using an explicit memory with hidden state updates (ConvLSTM) also performs worse than our proposed flow between output distributions.
Ablative model variant                | GT Mean | GT @1s | GT @3s | GT @10s | Det. Mean | Det. @1s | Det. @3s | Det. @10s
--------------------------------------|---------|--------|--------|---------|-----------|----------|----------|----------
Independent, categorical (fully conv) | 2.45    | 0.80   | 1.83   | 3.89    | 3.06      | 1.49     | 2.46     | 4.45
+ Sequential refinement (DRR)         | 2.40    | 0.80   | 1.78   | 3.83    | 3.02      | 1.49     | 2.44     | 4.40
Discrete Residual Flow                | 2.37    | 0.76   | 1.74   | 3.83    | 2.98      | 1.47     | 2.39     | 4.36

(GT: NLL on ground truth data; Det.: NLL on real detection data.)
Displacement error We compute the expected root mean squared error, or expected displacement error, between the ground truth pedestrian position and model predictions. This is approximated by discretizing posteriors, computing the distance from each cell to the ground truth, and taking the average weighted by confidence at each cell. Table 1 reports the error in meters, averaged over 50 timesteps (ADE) and at specific horizons (FDE). DRFNet significantly outperforms all baselines.
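The expected displacement error described above can be sketched directly from a discretized posterior: weight the metric distance from each grid cell to the ground-truth cell by that cell's probability mass. The function name and resolution are illustrative assumptions.

```python
import numpy as np

def expected_displacement_error(prob_grid, gt_rc, res_m_per_px=0.5):
    """Confidence-weighted distance (meters) to the ground-truth cell.

    `prob_grid` is a normalized H x W occupancy distribution; `gt_rc` is
    the ground-truth (row, col) index.
    """
    h, w = prob_grid.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # per-cell Euclidean distance to the ground-truth cell, in meters
    dist_m = res_m_per_px * np.hypot(rows - gt_rc[0], cols - gt_rc[1])
    return float((prob_grid * dist_m).sum())
```

A delta distribution on the correct cell scores zero; spreading mass away from the ground truth increases the error in proportion to the displaced mass.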
Model calibration To understand the overconfidence of predictive models, we compute calibration curves and expected calibration error (ECE) on the ground-truth test set, following Guo:2017:CMN:3305381.3305518, by treating models as multi-way classifiers over space. ECE measures miscalibration by approximating the expected difference between model confidence and accuracy. DRF has the lowest calibration error, with accuracy closest to the model confidence on average, as shown in Fig. 5. While somewhat overconfident, these models could be recalibrated with isotonic regression or temperature scaling.

Multimodality and Entropy Analysis We propose a ModePool operator to estimate the number of modes of a discrete spatial distribution. ModePool approximates the number of local maxima in a discrete distribution as follows, where the max is taken over local windows:
$\operatorname{ModePool}(p) = \sum_{x} \mathbb{1}\big[\, p(x) = \max_{x' \in \mathcal{N}(x)} p(x') \,\big]\, \mathbb{1}\big[\, p(x) > \tau \,\big]$   (7)
Only local maxima with mass exceeding a threshold $\tau$ are counted. ModePool is efficiently implemented on GPU by adapting the MaxPool filter commonly used in CNNs for downsampling. In Figure 6b, modality is estimated with , . Given our output resolution, at most one mode per m area can be counted. While the baseline MDN4 predicts multiple Gaussian distributions, we observe strong mode collapse. In contrast, DRF produces predictive posteriors that are increasingly multimodal over longer horizons. Though an MDN of 8 mixtures captures some multimodality as well, its mean number of modes is highly inconsistent over time (6b, middle).
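A pure-NumPy sketch of the ModePool computation in Eq. (7) follows; the window size and threshold defaults are illustrative, not the paper's settings, and a GPU implementation would replace the explicit loops with a max-pooling filter.

```python
import numpy as np

def mode_pool(prob_grid, window=5, threshold=0.01):
    """Count local maxima of a discrete 2-D distribution.

    A cell counts as a mode if it attains the max over its
    `window` x `window` neighborhood and its mass exceeds `threshold`.
    Note: cells on a tied plateau are each counted.
    """
    h, w = prob_grid.shape
    r = window // 2
    count = 0
    for i in range(h):
        for j in range(w):
            v = prob_grid[i, j]
            if v <= threshold:
                continue
            patch = prob_grid[max(0, i - r):i + r + 1,
                              max(0, j - r):j + r + 1]
            if v >= patch.max():
                count += 1
    return count
```

Two well-separated peaks register as two modes; mass below the threshold is ignored, so background noise does not inflate the count.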
Fig. 6c shows the mean entropy of the predicted distributions. Entropy for DRFNet is the lowest. As DRFNet also achieves lower NLL at all future horizons (6a), DRFNet predictions can be interpreted as low bias and low variance. We combine entropy and modality into a single metric in Fig. 6d. For the discrete heads (DRF, ConvLSTM), the entropy per mode saturates. These models capture inherent future uncertainty by adding distributional modes, e.g., high-level actions, rather than increasing per-mode entropy. This is not the case for the baselines, where entropy per mode grows over time. Qualitatively, in Fig. 7, DRFNet predictions remain the most concentrated over long horizons.

Semantic mass ratio Our semantic map partitions the world into three disjoint high-level classes. To interpret how well models understand the map, we measure confidence-weighted semantic accuracy, the mean predicted mass that falls on the correct map class. We also measure safety-sensitive recall, the mean mass that falls into a drivable region when the PoI is in a drivable region, since performance when a PoI is on-road is very important to a self-driving car. Let $c(x)$ be the class of location $x$, determined by the map, and $c_t^{*}$ be the ground-truth class of the PoI position at time $t$. Then, we compute the metrics as follows, reported in Table 1:
$\operatorname{Acc} = \frac{1}{T} \sum_{t=1}^{T} \sum_{x} p(s_t = x \mid \cdot)\, \mathbb{1}\big[c(x) = c_t^{*}\big]$   (8)

$\operatorname{Recall} = \frac{1}{|\mathcal{T}_d|} \sum_{t \in \mathcal{T}_d} \sum_{x} p(s_t = x \mid \cdot)\, \mathbb{1}\big[c(x) = \text{drivable}\big]$   (9)
where $\mathcal{T}_d = \{t : c_t^{*} = \text{drivable}\}$ is the set of safety-sensitive timesteps. DRFNet significantly outperforms the baselines on semantic mass ratio metrics, and most accurately predicts the type of surface the PoI will traverse. This suggests that DRFNet makes better use of the map, which is qualitatively reflected by low-entropy, concentrated mass within map polygons in Figure 7.
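Eqs. (8) and (9) can be sketched directly on discretized posteriors and an integer class map. The function name, shapes, and the class encoding (`drivable_id`) are illustrative assumptions.

```python
import numpy as np

def semantic_mass_metrics(prob_grids, class_grid, gt_classes, drivable_id=0):
    """Confidence-weighted semantic accuracy and safety-sensitive recall.

    `prob_grids`: T x H x W per-timestep distributions; `class_grid`: H x W
    integer map class per cell; `gt_classes`: length-T true PoI class.
    """
    acc_terms, recall_terms = [], []
    for p, c_true in zip(prob_grids, gt_classes):
        # mass landing on the correct map class at this timestep
        acc_terms.append(p[class_grid == c_true].sum())
        if c_true == drivable_id:            # safety-sensitive timestep
            recall_terms.append(p[class_grid == drivable_id].sum())
    recall = float(np.mean(recall_terms)) if recall_terms else float("nan")
    return float(np.mean(acc_terms)), recall
```

Both metrics live in [0, 1]: a model that concentrates all mass on the correct surface type scores 1 on both.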
Ablation Study We conduct an ablation study that evaluates the value of discrete predictions and of our residual flow formulation. We study two variants of the DRF prediction head: a fully convolutional head and a discrete residual refinement (DRR) head. MDNs predict continuous mixtures of Gaussians assuming conditional independence of future states, which can be discretized for cost-based planning. We can instead directly predict independent discrete distributions. The fully convolutional predictor projects the spatial feature (Section 3) into a 50-channel space representing per-timestep logits with a 1×1 convolution on scene features. A spatial softmax produces valid distributions over the discrete spatial support. The DRR head takes as input the discrete probability distributions output by our fully convolutional predictor and sequentially predicts per-timestep residuals in log-space with per-timestep weights. DRR thereby refines independent predictions sequentially.
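The spatial softmax used by the fully convolutional head can be sketched as follows: each of the per-timestep channels is normalized over its spatial support. The function name and shapes are illustrative.

```python
import numpy as np

def spatial_softmax(logits):
    """Normalize T x H x W per-timestep logits into spatial distributions."""
    t, h, w = logits.shape
    flat = logits.reshape(t, h * w)
    flat = flat - flat.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(flat)
    p = e / e.sum(axis=1, keepdims=True)
    return p.reshape(t, h, w)
```

Each output slice sums to one over the grid, so every timestep yields a valid categorical distribution usable as an occupancy grid.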
Table 3 shows that state-space discretization and categorical prediction (fully convolutional head) achieve significantly better NLL than the best continuous mixture model in Table 1, a reduction in NLL. Sequential refinement of independent predictions using DRR improves performance. However, predicting a flow in log-probability space with DRF achieves the best likelihood.
Qualitative Results Figure 7a shows predictions for a pedestrian in a challenging pre-crossing scenario. Predictive posteriors modeled by DRFNet (4th row) express high multimodality and concentrated mass, with three visible high-level actions: stopping, crossing straight, or crossing while skirting around a car. DRFNet also exhibits strong map interactions, avoiding parked vehicles. However, the MDNs predict highly entropic, unimodal distributions, and the ConvLSTM places substantial spurious mass on parked vehicles. Across other test scenes, we observe that DRFNet constructs low-entropy yet multimodal predictions with similarly strong map and actor interactions. In Figure 7d, DRFNet is the only model to correctly predict a crosswalk approach. Still, in failure cases, all models predict crossings too early, possibly due to unknown traffic light state. This could lead to more conservative self-driving vehicle plans if the pedestrians were nearby. Nonetheless, these pedestrians and lights are distant.
5 Conclusion
In this paper, we develop a probabilistic modeling technique, Discrete Residual Flow, and apply it to pedestrian behavior prediction. We encode multi-actor behaviors into a bird's-eye-view rasterization aligned with a detailed semantic map. Based on deep convolutional neural networks, a probabilistic model is designed to sequentially update marginal distributions over future actor states from the rasterization. We empirically verify the effectiveness of our model on a large-scale, real-world urban dataset. Extensive experiments show that our model outperforms several strong baselines, expressing high likelihood, low error, low entropy and high multimodality. The strong performance of DRFNet's discrete predictions is very promising for cost-based and constrained robotic planning.
Acknowledgments
We would like to thank Abbas Sadat for useful discussions during the development of this research.
References
6 Appendix
In this appendix, we provide additional implementation (Sections 6.1 and 6.2), training (Section 6.3) and evaluation (Section 6.4) details for our proposed DRFNet and baseline architectures. We also provide a derivation of the DRF update equation (Section 6.5).
6.1 Backbone network
In Section 3 of the paper, we described a deep convolutional neural network architecture that represents our spatiotemporal scene rasterization as a global feature. This CNN architecture forms the initial layers of the proposed model and baselines, though each network is trained end-to-end (backbone parameters are not shared across models). The backbone architecture is detailed in Figure 8, below. The proposed DRFNet further projects the feature into a reduced channel space with a learned convolutional filter for memory efficiency.
6.2 Rasterization
Rasterization dimensions
The input bird's-eye-view (BEV) region is rotated for a fixed pedestrian-of-interest heading at the current time and spans meters perpendicularly and meters longitudinally, ahead of and behind the last observed pose of the pedestrian. We set the input resolution to meters per pixel and the output resolution of our spatial distribution to meters per pixel. At the input resolution, our BEV rasterization channels are each px by px.
Encoding observed actor behavior We use the object detector proposed in liang2018deep, which exploits LiDAR point clouds as well as cameras in an end-to-end fashion to obtain reliable bounding boxes of dynamic agents. Further, we associate the object detections using a matching algorithm and refine the trajectories using an Unscented Kalman Filter [wan2000unscented]. These detections are rasterized for past timesteps, with ms elapsing between timesteps. At any past time $t$, DRFNet renders a binary image for pedestrian occupancy, where a pixel is set to 1 when it lies within a convex bounding octagon of a pedestrian's centroid. Other cells are encoded as 0. Bounding polygons of vehicles, bicycles, buses and other non-pedestrian actors are also rendered in a binary image. In Figure 3c and Figure 8, we show how temporal information is encoded in the channel dimension of these tensors.
To discriminate the pedestrian of interest (PoI) from other actors, a grayscale image masks the tracklet of the pedestrian to be predicted. As a convention, let the current timestep be 0. If a pixel is contained within the bounding polygon of the PoI at a past timestep, it is assigned an intensity that decays with the age of that detection. By doing so, the whole PoI tracklet is encoded in a single channel with decaying intensity for older detections. This encoding allows for variable track lengths. All rasterization channels are rotated for fixed PoI orientation at the current time. We compute orientation from the difference of the last two observed locations.
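The decaying-intensity tracklet channel can be sketched as follows. The linear decay schedule and function name are assumptions for illustration; the original encoding only requires that older detections receive lower intensity.

```python
import numpy as np

def poi_tracklet_channel(masks):
    """Encode a PoI tracklet in one grayscale channel.

    `masks` is a list of H x W binary occupancy masks ordered oldest to
    newest; newer detections are drawn with higher intensity.
    """
    n = len(masks)
    channel = np.zeros(masks[0].shape, dtype=np.float64)
    for age, mask in enumerate(masks):        # age 0 = oldest
        intensity = (age + 1) / n             # most recent step gets 1.0
        channel = np.where(mask > 0, intensity, channel)
    return channel
```

Because the whole history lives in one channel, tracks of any length fit the same input layout.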
To allow the network to localize objects in the rasterization, two additional positional encoding channels encode the $x$ and $y$ coordinates as real values from to , with value at the last known PoI location. Similar channels are used in [liu2018intriguing].
Encoding semantic map To represent the scene context of the pedestrian, DRFNet renders map polygons into 15 semantic map channels, where each channel corresponds to a finely differentiated urban surface label. Crosswalks and drivable surfaces (roadways and intersections) are rasterized into separate channels. While sidewalks are not explicitly encoded, non-drivable surfaces are implied by the road map. Three channels indicate traffic light state, classified from the on-vehicle camera with a known traffic light position: the green, red, and yellow light channels each fill the lanes passing through intersections controlled by the corresponding light state. Similarly, lanes leading to yield and stop signs are encoded into channels. Finally, we encode other detailed lanes, such as turn, bike, and bus lanes, and a combined channel for all lane markers. In detail, the 15 channels are as follows:

Aggregated road mask, masking all drivable surfaces

Masked crosswalks

Masked intersections

Masked bus lanes

Masked bike lanes

All lane markers / dividers

Masked lanes leading to stop sign

Masked lanes leading to yield sign

Lanes controlled by red stop light

Lanes controlled by yellow light

Lanes controlled by green light

Lanes without a turn

Rightturn lanes

Protected leftturn lanes

Unprotected leftturn lanes
This information is annotated in a semi-automated fashion in cities where the self-driving vehicle may operate (Section 4.1), and only polygons and polylines are stored.
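Assuming the channels are stacked in the order listed, the layout can be captured with a simple index map; the channel names below are our own shorthand, not identifiers from the paper.

```python
# Illustrative channel-index layout for the 15 semantic map channels,
# in the order listed above; names are our own shorthand.
MAP_CHANNELS = [
    "road_mask", "crosswalks", "intersections", "bus_lanes", "bike_lanes",
    "lane_markers", "stop_sign_lanes", "yield_sign_lanes",
    "red_light_lanes", "yellow_light_lanes", "green_light_lanes",
    "no_turn_lanes", "right_turn_lanes",
    "protected_left_turn_lanes", "unprotected_left_turn_lanes",
]
CHANNEL_INDEX = {name: i for i, name in enumerate(MAP_CHANNELS)}
```

A fixed name-to-index map like this keeps rasterization code and model code consistent about which slice of the input tensor carries which surface label.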
6.3 Training
Computing negative log likelihood For the density visualization in Figure 7, and for the discrete negative log likelihood metrics in Table 1, the predicted MDN mixture is numerically integrated using a centered approximation with multiple sampling points for each output grid cell. Discretizing the MDN allows the NLL metric to be compared between continuous and discrete predictions.
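A minimal sketch of this discretization follows, using a single centered sampling point per cell (the paper uses multiple points; refining the rule is straightforward). All function names and shapes are illustrative assumptions.

```python
import numpy as np

def discretize_mixture(means, covs, weights, cell_centers, cell_area):
    """Centered approximation of a 2D Gaussian mixture on a grid.

    Evaluates the mixture density at each cell center and multiplies by
    the cell area, yielding a per-cell probability mass. cell_centers is
    an (N, 2) array of grid-cell centers.
    """
    probs = np.zeros(len(cell_centers))
    for mu, cov, w in zip(means, covs, weights):
        inv = np.linalg.inv(cov)
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
        d = cell_centers - mu
        # batched quadratic form d^T inv d for every cell center
        probs += w * norm * np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, inv, d))
    return probs * cell_area

def discrete_nll(probs, target_idx, eps=1e-12):
    """NLL of the ground-truth cell under the discretized distribution."""
    return -np.log(probs[target_idx] + eps)
```

With cells small relative to the mixture's spread, the per-cell masses sum to approximately 1, so the discrete NLL is directly comparable to that of a natively discrete predictor.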
Optimization In our experiments with manually annotated trajectories, we train our models from scratch using the Adam optimizer [Kingma2015AdamAM]. When using trajectories from a real perception system, we fine-tune the models learned on the ground-truth data to better handle missing pedestrians and detector/sensor noise. Each training batch contains a fixed number of pedestrian trajectories. All experiments are performed with distributed training on GPUs.
6.4 Metrics
Measuring modality To compute the number of modes (local maxima) in a distribution, we proposed the ModePool operator. This operator in fact overestimates modality for MDNs, especially for the Density Network at short timescales, due to quantization error and the fixed window size: computing the modality of a continuous distribution requires discretizing it first, and when distributions are very long and narrow, as in the Density Network's short-term predictions, multiple modes can be registered. Despite this overestimation, models with the proposed discrete prediction space (ConvLSTM, DRF-Net) expressed higher multimodality than the MDNs.
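The paper does not spell out ModePool's implementation here; a minimal mode-counting sketch in the same spirit is below. The window size and mass threshold are assumed values, and, as noted above, a fixed window can over-count modes for long, narrow densities.

```python
import numpy as np

def count_modes(p, window=3, threshold=1e-4):
    """Count local maxima in a discrete 2D distribution.

    A cell counts as a mode if it attains the maximum over its
    window x window neighborhood and carries mass above threshold.
    Padding with -inf ensures border cells compare only against
    in-grid neighbors.
    """
    H, W = p.shape
    r = window // 2
    padded = np.pad(p, r, constant_values=-np.inf)
    modes = 0
    for i in range(H):
        for j in range(W):
            neighborhood = padded[i:i + window, j:j + window]
            if p[i, j] >= threshold and p[i, j] == neighborhood.max():
                modes += 1
    return modes
```

A max-pooling comparison like this is cheap and differentiable-free, which makes it usable as an offline evaluation metric rather than a training signal.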
6.5 Derivation of Discrete Residual Flow
We derive Equation (3), the discrete residual flow update equation, as an approximation to explicit marginalization of a joint state distribution. By the law of total probability,
(10)  $p(s_t \mid x) \,=\, \sum_{s_{t-1}} p(s_t, s_{t-1} \mid x)$
(11)  $p(s_t \mid x) \,=\, \sum_{s_{t-1}} p(s_t \mid s_{t-1}, x)\, p(s_{t-1} \mid x)$
Equation (11) can be seen as a recursive update to the previous timestep's state marginal. Recall that $s_t$ is a categorical random variable over $K$ bins. Instead of representing the pairwise conditional distribution and conducting the summation once per output bin at cost $O(K^2)$ per timestep, we approximate (11) with a pointwise update,
(11)  $p(s_t = s \mid x) \,=\, \sum_{s'} p(s_t = s \mid s_{t-1} = s', x)\, p(s_{t-1} = s' \mid x)$
(12)  $p(s_t = s \mid x) \,\approx\, \tfrac{1}{Z_t}\, g_t(s, x)\, p(s_{t-1} = s \mid x)$
(13)  $p(s_t = s \mid x) \,=\, \tfrac{1}{Z_t}\, \exp\!\big(r_t(s, x)\big)\, p(s_{t-1} = s \mid x)$
where $Z_t$ is a normalization constant and $g_t$ is a parametric approximator for the summation; writing $g_t(s, x) = \exp\!\big(r_t(s, x)\big)$ defines what we refer to as the residual predictor $r_t$. In principle, a sufficiently expressive residual predictor can model the summation exactly. While the residual is applied as a scaling factor in Equation (13), it becomes more natural to understand when the recursive definition is expressed in the log domain, completing the derivation,
(14)  $\log p(s_t = s \mid x) \,=\, \log p(s_{t-1} = s \mid x) \,+\, r_t(s, x) \,-\, \log Z_t$
We construct $r_t$ such that it can be computed in parallel across all locations $s$, and such that the update to $\log p(s_t \mid x)$ is an elementwise sum followed by normalization. In DRF-Net, $r_t$ is instantiated with a neural network that outputs a 2D image indexable at these locations (Figure 4). The update (14) then incurs cost $O(K)$ per timestep.
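The log-domain update just described can be sketched in a few lines; this is an illustrative implementation of the update rule, not the paper's code, operating on a flattened grid of K cells.

```python
import numpy as np

def drf_update(log_p_prev, residual):
    """One discrete-residual-flow step in the log domain (cf. Eq. (14)).

    log_p_prev: log of the previous marginal over K grid cells (flattened).
    residual:   residual-predictor output at the same K cells.
    The elementwise sum followed by log-normalization costs O(K).
    """
    logits = log_p_prev + residual
    m = logits.max()
    log_z = m + np.log(np.exp(logits - m).sum())  # stable log-sum-exp
    return logits - log_z  # normalized log-marginal for timestep t
```

Because the normalizer is a single log-sum-exp over the grid, the whole rollout over T timesteps is a chain of cheap elementwise updates.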
Through this lens, the baseline fully convolutional predictor and the mixture density networks, which assume the future states are conditionally independent given the context, $p(s_1, \dots, s_T \mid x) = \prod_t p(s_t \mid x)$, directly approximate each marginal with a per-timestep predictor $f_t$:
(15)  $p(s_t = s \mid x) \,=\, f_t(s, x)$
The baseline ConvLSTM propagates a cell state $c_t$ and hidden state $h_t$ between steps and shares the parameters of the predictor across timesteps, without sampling from intermediate marginals:
(16)  $(h_t, c_t) \,=\, \mathrm{ConvLSTM}(h_{t-1}, c_{t-1}), \qquad p(s_t = s \mid x) \,=\, f(s, h_t)$
Discrete residual flow retains most of the benefits of the independence assumption, i.e., tractable marginal distribution estimation and parallelizability, with an update that more closely resembles a Markov chain; however, no sampling occurs between timesteps. As established in our ablation study (Table 3), applying the DRF update, Eq. (14), outperforms both the baseline fully convolutional predictor, Eq. (15), and the ConvLSTM update, Eq. (16).