All convolutions within a dense block are ReLU-activated and use batch normalization. Channel-wise concatenation is only possible if the height and width dimensions of the data stay unchanged, so all convolutions within a dense block have a stride of 1. Pooling layers are inserted between dense blocks for further dimensionality reduction.
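To make this concrete, here is a minimal sketch of a dense block followed by a transition (pooling) layer, assuming PyTorch; the class names, the `growth_rate` parameter, and the layer counts are illustrative choices, not a reference implementation.

```python
import torch
from torch import nn


class DenseBlock(nn.Module):
    """A dense block: each conv layer's output is concatenated channel-wise
    with everything that came before, so spatial dims must stay fixed."""

    def __init__(self, num_convs: int, in_channels: int, growth_rate: int):
        super().__init__()
        layers = []
        for i in range(num_convs):
            # Each layer sees the original input plus all previous outputs.
            channels = in_channels + i * growth_rate
            layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(),
                # stride=1 with padding=1 keeps height and width unchanged,
                # which is what makes channel-wise concatenation possible.
                nn.Conv2d(channels, growth_rate,
                          kernel_size=3, stride=1, padding=1),
            ))
        self.layers = nn.ModuleList(layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            y = layer(x)
            # Concatenate along the channel dimension (dim=1 in NCHW).
            x = torch.cat((x, y), dim=1)
        return x


class TransitionLayer(nn.Module):
    """Inserted between dense blocks: a 1x1 conv shrinks the channel count
    and average pooling halves the spatial dimensions."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(),
            nn.Conv2d(in_channels, out_channels, kernel_size=1),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Quick shape check with illustrative sizes:
x = torch.randn(1, 64, 32, 32)
block = DenseBlock(num_convs=4, in_channels=64, growth_rate=32)
y = block(x)                      # (1, 64 + 4*32, 32, 32) = (1, 192, 32, 32)
trans = TransitionLayer(192, 96)
z = trans(y)                      # (1, 96, 16, 16)
```

Note how the channel count grows linearly inside the block (by `growth_rate` per layer) while height and width never change; only the transition layer between blocks reduces the spatial resolution.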