To manually load the input data, the deep learning processor IP core convolution and fully connected module instructions, the pretrained series network layer instructions, and the weights and biases, and to retrieve the output results, use the compiler-generated external memory address map. Alternatively, use the dlhdl.Workflow workflow. The workflow generates the external memory address map, loads the inputs, module instructions, layer instructions, weights, and biases, and retrieves the output results.
When you create a dlhdl.Workflow object and use the compile method, an external memory address map is generated. The compile method generates these address offsets based on the deep learning network and target board:
InputDataOffset — Address offset where the input images are loaded.
OutputResultOffset — Output results are written starting at this address offset.
SystemBufferOffset — Do not use the memory region starting at this offset and ending at the start of the InstructionDataOffset.
InstructionDataOffset — All layer configuration (LC) instructions are written starting at this address offset.
ConvWeightDataOffset — All convolution (conv) processing module weights are written starting at this address offset.
FCWeightDataOffset — All fully connected (FC) processing module weights are written starting at this address offset.
EndOffset — DDR memory end offset for the generated deep learning processor IP.
This example displays the external memory map generated for the logo recognition network that uses the arria10soc_single bitstream. Compile the dlhdl.Workflow object.
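The compile step described above can be sketched as follows. The pretrained network variable snet and the Intel target interface are assumptions; substitute your own series network and deployment target.

```matlab
% Sketch only: snet is assumed to hold a pretrained series network,
% such as the logo recognition network from the example.
hTarget = dlhdl.Target('Intel');                 % target vendor for the Arria 10 SoC board

% Create the workflow object for the arria10soc_single bitstream.
hW = dlhdl.Workflow('Network', snet, ...
                    'Bitstream', 'arria10soc_single', ...
                    'Target', hTarget);

% Compiling generates and displays the external memory address map,
% including the offsets listed above (InputDataOffset, EndOffset, and so on).
dn = hW.compile;
```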