This tutorial explores the use of Abstract Shells in Dynamic Function eXchange (DFX) on the ZuBoard 1CG, built around the Zynq UltraScale+ MPSoC architecture using the Vivado Design Suite and Vitis toolchain. It presents a complete end-to-end flow, from hardware design and DFX partitioning in Vivado using Block Design Containers, to generating full and partial bitstreams and deploying a Vitis application that performs runtime partial reconfiguration on the board. To the best of my knowledge, there are few publicly available tutorials that demonstrate this complete DFX and Abstract Shell flow from hardware creation to runtime reconfiguration on an actual board.
The primary focus is on understanding how the Abstract Shell DFX flow can accelerate development by reducing implementation time for reconfigurable modules, enabling modular and team-based workflows, and supporting IP protection. A structured, script-driven methodology is introduced to automate bitstream generation and simplify handoff of shell interfaces to teams developing new reconfigurable modules. While demonstrated using a simple design on the ZuBoard 1CG, the flow can be extended to other supported UltraScale+ devices, providing a scalable foundation for building more advanced and complex DFX systems.
These are the tools used to run this tutorial. To reproduce the results exactly, it is recommended to use the 2025.1 version of the tools, but the flow should be easy to adapt to other versions.
Vivado 2025.1
Vitis 2025.1
ZuBoard 1CG
Check this Getting Started tutorial on installing the tools specifically for the ZuBoard 1CG. Although that tutorial targets tool version 2023.1, the steps should be similar for installing 2025.1 and newer.
Remember to include the devices you plan to target when installing the tools. This tutorial targets the ZuBoard 1CG, which carries a Zynq UltraScale+ MPSoC (part_name="xczu1cg-sbva484-1-e"), as shown in the image below.
All the source code used in this tutorial is available in this repository:
git clone https://bitbucket.org/prajithrg/dfx_abstract_shell.git
DFX Dynamic Function eXchange
BDC Block Design Container
RM Reconfigurable Module
RP Reconfigurable Partition
XSA Xilinx Support Archive
As with most features in the Vivado Design Suite, Dynamic Function eXchange (DFX) operations in the GUI are executed through underlying Tcl commands. You can review the exact Tcl commands executed by opening the Vivado journal file (File → Project → Open Journal File). Projects can also be exported as reusable Tcl scripts using the Write Project Tcl option. For detailed syntax and usage, each Tcl command provides built-in help via the -help option. In this tutorial, the complete Vivado design flow is automated using Tcl scripts, which are available under the dfx_abstract_shell/tcl/vivado/ directory. The key scripts found here are as follows:
common/
board.tcl - The design supplied here targets the Avnet/Tria ZuBoard 1CG with part number xczu1cg-sbva484-1-e. Modifying this file allows the design to target other boards.
paths.tcl - Initializes the project and artifact paths used during script runs
clean.tcl - Cleans the folders generated from running the scripts
parent_dfx/
01_create_top_bd.tcl – Creates the static block design and all IP for a completely flat design.
02_create_rp1_bdc.tcl - Creates two levels of hierarchy and converts one to be a block design container.
03_enable_dfx_bdc.tcl - Turns the standard BDC into a DFX BDC
04_create_rp1rm2.tcl - Creates a new RM for the existing DFX BDC.
05_make_top_wrapper.tcl - Creates the HDL wrapper for the top-level block design and generates its output products
06_create_dfx_configs.tcl - Sets up DFX design runs for both Standard DFX and Abstract Shell runs.
07_run_impl_1.tcl - Launches OOC synthesis of RMs, parent synthesis, parent implementation and child implementation and generates bitstreams.
08_export_xsa_bit.tcl - Exports the hardware platform XSA and partial bitstreams for Vitis development
09_write_abstract_shell_rp1.tcl - Writes the Abstract Shell for the target RP
10_export_rm_handoff.tcl - Exports required files for RM handoff to another team
run_all_parent_dfx.tcl - Combines the above scripts to automate the parent Abstract shell DFX flow.
reconfig_modules/
01_create_rm3_bd_prj.tcl - Creates a separate RM project to mimic modular development
02_run_synth_rm3.tcl - Synthesizes this RM design
03_generate_rm_partial_bit.tcl - Links the RP abstract shell from the parent DFX design with the newly synthesized RM design
run_all_rm.tcl - Combines the above scripts to automate the child Abstract shell DFX flow.
Other folders inside the root dfx_abstract_shell repository:
constraints/ - Contains XDC constraint files used for floorplanning, partition definition, and board-level pin assignments
pblocks.xdc - Defines the physical regions (pblocks) for RPs used in the DFX floorplan.
rgb1.xdc - Provides board-specific pin constraints for the RGB1 interface on the ZuBoard 1CG.
src/ - Contains the Vitis application source for runtime partial reconfiguration
The output directories produced by running these scripts are collected under the root dfx_abstract_shell directory as follows:
artifacts/
bitstreams/ - Stores the generated full and partial bitstreams
checkpoints/ - Stores the post synthesis netlist and routed checkpoints
handoff_rm/ - Stores the reference block diagram and stub interface to be handed over for RM development
xsa/ - Stores the generated hardware archive to be handed over for Vitis application development
projects/ - Stores the generated Vivado parent DFX and RM projects
This tutorial can be run completely via scripts in the Vivado Tcl shell. Each sub-section can also be run individually using the corresponding Tcl scripts listed above. The tutorial as written walks manually through most steps to show what is happening throughout the flow, and the sub-section scripts are noted along the way. Iterative interaction with the IP integrator design is out of scope for this tutorial.
This section describes the steps for creation and compilation of a complete hardware design through Vivado using IP integrator and BDCs.
Open the Vivado Tcl console. In the Tcl console, navigate to the dfx_abstract_shell/tcl/vivado folder.
If you prefer the CLI, the Vivado Tcl console can be started by running: vivado -mode tcl
In the Vivado UI, the Tcl console is available as a tab at the bottom of the window.
Source the first Tcl script (01_create_top_bd.tcl) to create a flat version of the design that will target the ZuBoard 1CG.
source 01_create_top_bd.tcl
This script performs a few tasks:
Creates a new project for a ZuBoard 1CG target
Adds and customizes a collection of IPs
Connects the IP within the block design
Validates and saves the block design
The 01_create_top_bd.tcl script was generated from an existing block design by Write Project Tcl option in Vivado.
In this section, you will split the design into two hierarchical instances. The creation of a level of hierarchy for the static part of the design is not required; this is done simply to organize the design and focus attention on the dynamic region but could also be used for easier floorplanning for implementation if desired. The level of hierarchy for what will be the RP, however, is required, as this will be converted to a BDC.
Follow the instructions below or run the Tcl script which automates the steps.
source 02_create_rp1_bdc.tcl
Right-click on the axi_gpio_1 instance and select Create Hierarchy. This is the GPIO IP connected to the DFX Decoupler IP.
In the resulting dialog box, name the hierarchy rp1 and click OK.
Select the xlconstant_1 instance and drag and drop it into the rp1 hierarchy instance.
Right-click on the Zynq UltraScale+ MPSoC instance and select Create Hierarchy.
In the resulting dialog box, name the hierarchy static_region and click OK.
One by one, select all remaining instances (other than rp1) and drag and drop them into the static_region level of hierarchy. The resulting block design can be cleaned up by running Regenerate Layout, and each level of the hierarchy can be collapsed. The resulting block design should look as shown below.
Right-click on the canvas to select Validate Design, then save the block design.
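For reference, the hierarchy creation above corresponds roughly to the following Tcl, as recorded in the journal on my setup (instance names are the ones used in this design; verify against 02_create_rp1_bdc.tcl):
# Group the GPIO connected to the DFX Decoupler, plus its constant, into a hierarchy named rp1
group_bd_cells rp1 [get_bd_cells axi_gpio_1] [get_bd_cells xlconstant_1]
# Group every other top-level cell into static_region (the filter excludes the new rp1 hierarchy)
group_bd_cells static_region [get_bd_cells * -filter {NAME !~ "*rp1*"}]
validate_bd_design
save_bd_design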
Now that levels of hierarchy are established, the rp1 instance can be converted to a BDC, which will represent the RP.
Right-click on the collapsed rp1 instance and select Create Block Design Container.
Name the container rp1rm1 and click OK.
This will convert the hierarchical instance into a block design container. The level of the hierarchy is labeled rp1rm1.bd and the block contains an icon that looks like a pyramid of six rectangles.
In the Sources window, you will see a new block design rp1rm1 being added to the project.
This action has created a new block design for the rp1 submodule. If you expand the rp1 instance in the design_1 block design, you will see that you cannot edit the design at that level. This is a read-only copy, so to edit the design you must open the source rp1rm1.bd block design from the Sources view. This is not necessary now.
Modify the Address information for the rp1rm1 instance by selecting the Address Editor window for the top-level block design. Completely expand the information for Network 0 then modify the range for /rp1/axi_gpio_1/S_AXI by changing it to 64K.
Return to the diagram, right-click and select Validate Design. After validation completes, click Save to save the block design.
The design as it stands now is still a standard IP integrator project but with two block designs instead of only one. The block design container feature in IP integrator allows you to add multiple design sources for the rp1 hierarchical instance, enabling changes through the use of multiple design revisions, or allowing for team design by sharing submodule block designs with team members.
In this section, you will enable the DFX capabilities within IP integrator and add new RMs in the rp1 BDC.
Follow the instructions below or source 03_enable_dfx_bdc.tcl to automate the steps.
Note: This action is irreversible. Once a project is converted to a DFX project, it cannot be changed back. The design runs infrastructure and all the DFX-centric settings are expected from this point forward, and DRCs are enabled to keep users on the correct path. It is recommended that designs be archived before this conversion to save a non-DFX version.
Select Tools > Enable Dynamic Function eXchange to expose DFX features within the Vivado IDE. Select the Convert option in the dialog box that opens.
Once this step has been run, you will see new menu items appear, most notably the Dynamic Function eXchange Wizard in the Flow Navigator and under the Tools menu.
Note: If the project is not explicitly converted by the user, it will be automatically done when the block design is generated later in the flow, based on the DFX setting on the block design container. Even if the conversion is automatic, it is still irreversible.
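For script users, this conversion can also be done from Tcl. On my setup the journal records it as a single project property; treat the exact property name as an assumption and verify against 03_enable_dfx_bdc.tcl:
# Convert the project to a DFX project (irreversible, as noted above)
set_property PR_FLOW 1 [current_project]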
In the design_1 diagram, double click on the rp1 instance to edit the block design container.
Under the General tab, check both the Enable Dynamic Function eXchange on this container and the Freeze the boundary of this container options.
Checking the Enable Dynamic Function eXchange on this container option defines the rp1 instance to be a RP. Freezing the boundary of this container prevents parameter propagation across the boundary interface.
Click the Addressing tab to see the aperture for this block design container. The Address Offset is 0xA000_0000 and the Address Range is 64K, matching the information supplied in rp1rm1. Check Show Detailed View to see that the aperture for rp1rm1 matches the general aperture for rp1 overall. No changes are necessary at this point; this tab will be revisited later.
Click OK to save the changes and return to the design_1 diagram. You will see that the icon on the rp1 block design container has changed to show a “DFX” label.
Click Validate Design then Save to save the design.
DFX would not be very compelling without multiple RMs to swap between, so the next step is to create a new RM for the RP that now exists. Follow the instructions below or source 04_create_rp1rm2.tcl to automate the steps.
Right-click on the rp1 instance and select Create Reconfigurable Module. In the dialog box that opens, give the RM a name of rp1rm2 and click OK.
A new block design is created and opened. The diagram consists of three input ports and one output port, the same port list as the first RM for the rp1 partition.
Note: The port list for each RM for a given RP must be identical, even if not all of the ports are used by each RM. Note that in the log (and script) the create_bd_design command uses the -boundary_from_container option, copying the explicit port list from the block design container.
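For reference, the scripted equivalent uses create_bd_design with the -boundary_from_container option, roughly as follows:
# Create a second RM whose boundary (port list) is copied from the rp1 block design container
create_bd_design -boundary_from_container [get_bd_cells /rp1] rp1rm2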
Add a new IP to the canvas by clicking the + icon and using the search field to find the AXI GPIO IP. Add it to the canvas, then double-click to customize. Check the All Inputs box for GPIO, ensure the GPIO Width is set to 32, Enable Dual Channel, ensure the GPIO 2 Width is set to 3, and Default Output Value set to 0x2; then OK to return to the canvas.
Click the + again and use the search field to add a Constant IP to the canvas. Double-click to customize. Change the Const Width to 32 and Const Val to 0xFACEFEED. Click OK to accept the edits.
Connect the pins to create the diagram as shown in following figure. Note: You will need to expand the GPIO port to expose the 32-bit input bus to match the type of the Constant dout bus. Regenerate the layout to make it look nice.
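The equivalent Tcl for adding and customizing these two IPs looks roughly like the sketch below. The axi_gpio_0 name matches the Address Editor entry used in the next step; the xlconstant_0 name, the IP versions, and the exact parameter spellings are assumptions to be checked against the journal or 04_create_rp1rm2.tcl:
# AXI GPIO: channel 1 = 32-bit all-input, channel 2 = 3-bit output with default value 0x2 (green)
create_bd_cell -type ip -vlnv xilinx.com:ip:axi_gpio:2.0 axi_gpio_0
set_property -dict [list CONFIG.C_ALL_INPUTS {1} CONFIG.C_GPIO_WIDTH {32} \
    CONFIG.C_IS_DUAL {1} CONFIG.C_GPIO2_WIDTH {3} \
    CONFIG.C_DOUT_DEFAULT_2 {0x00000002}] [get_bd_cells axi_gpio_0]
# Constant read back over GPIO channel 1 to identify which RM is currently loaded
create_bd_cell -type ip -vlnv xilinx.com:ip:xlconstant:1.1 xlconstant_0
set_property -dict [list CONFIG.CONST_WIDTH {32} CONFIG.CONST_VAL {0xFACEFEED}] [get_bd_cells xlconstant_0]
# Drive the GPIO channel-1 input from the constant
connect_bd_net [get_bd_pins xlconstant_0/dout] [get_bd_pins axi_gpio_0/gpio_io_i]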
Change to the Address Editor tab and note that no addresses have been assigned. Right-click on the row for /axi_gpio_0/S_AXI and select Assign. This sets a 64K range starting at address 0x4000_0000.
Modify the Master Base Address so it starts at 0xA001_0000 then keep the Range at 64K.
Validate and save the rp1rm2 block design. In this simple design, there are three differences between rp1rm1 and rp1rm2:
The S_AXI base addresses are different: This will be used to show that device tree overlays must be created and managed for designs that might have different requirements between RMs.
The constant values that can be read via GPIO are different: This will be used to confirm that dynamic reconfiguration in hardware has been done successfully.
The default value of the GPIO out is set to (0x2) GREEN to visually confirm from the board that dynamic reconfiguration in hardware has been done successfully.
Ensure that each RM has the appropriate aperture for its AXI slave interface, aligned to the instance in the top level. This is automatically done but can be manually set if desired.
In the top-level block design, double click on the rp1 block design container. Switch to the Addressing tab and click the Show Detailed View checkbox.
You can see that the overall aperture for rp1 for the S_AXI port starts at address 0xA000_0000 and has an overall range of 128K. This is automatically calculated by collecting address information from each design source in the block design container and summarizing each module’s requirements.
If the aperture must be expanded to include RMs that have not been created yet, toggle the Mode from Auto to Manual and edit the master Offset or Range. For example, in the following sections we will use the Abstract Shell DFX flow to build a third RM; using manual mode, the aperture can be expanded to 192K as shown below.
Click OK to return to the top-level block design.
Validate and save design_1.bd.
The next step before processing through synthesis and implementation is to create an HDL wrapper for the top-level block design, then generate targets for synthesis.
Follow the instructions below or source 05_make_top_wrapper.tcl to automate the steps.
In the Sources window, right-click on design_1.bd and select Create HDL Wrapper. Keep the Let Vivado Manage option selected and click OK. In the Sources window, design_1_wrapper.vhd has been created and added to the project. This HDL file instantiates the design_1 block design.
In the Flow Navigator, click the Generate Block Design command under the IP INTEGRATOR header. In the resulting dialog box, keep the Out of context per IP option selected, then click Generate.
This action creates synthesizable output products for each IP in design_1, building out-of-context synthesis runs for each IP. In the Design Runs window, you will see that synthesis runs for all the IP contained in design_1 (within the static_region hierarchy, i.e., everything not included in the rp1 block container) have been created and are now running. The runs for the IP within the block container, for sources rp1rm1 and rp1rm2, have been created but are not launched; this will be requested later.
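The scripted equivalent of the wrapper creation and output product generation is roughly the following; the exact commands are in 05_make_top_wrapper.tcl:
# Create the Vivado-managed HDL wrapper and make it the top module
add_files -norecurse [make_wrapper -files [get_files design_1.bd] -top]
set_property top design_1_wrapper [current_fileset]
# Generate output products for design_1 (the project's out-of-context per IP setting applies)
generate_target all [get_files design_1.bd]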
All DFX designs require a floorplan. Each RP requires a Pblock containing enough programmable resources to implement any RM that can be inserted in that partition. In this tutorial, these Pblock and RGB pin constraints for ZuBoard 1CG have been created for you.
In the Sources window, click the + to open the Add Sources dialog box. Select Add or create constraints, then Next. Click Add Files, navigate to the dfx_abstract_shell/constraints directory to find pblocks.xdc and rgb1.xdc, then click Finish to add these constraint files to the project.
If desired, you can open the post-synthesis design to view the Pblock created for this design. Editing or optimizing the Pblock is out of scope for this tutorial; for more information on floorplanning requirements and methodology recommendations for UltraScale+ devices, see the Vivado Design Suite User Guide: Dynamic Function eXchange (UG909).
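For reference, a DFX Pblock constraint typically looks like the sketch below. This is illustrative only; the resource ranges are placeholders, and the real ranges for the xczu1cg device are in pblocks.xdc:
# Define the physical region for RP1 and assign the RP cell to it
create_pblock pblock_rp1
add_cells_to_pblock [get_pblocks pblock_rp1] [get_cells design_1_i/rp1]
# Placeholder ranges -- use the ranges from pblocks.xdc for the real design
resize_pblock [get_pblocks pblock_rp1] -add {SLICE_X0Y60:SLICE_X27Y119}
# Let Vivado snap the Pblock boundary to legal reconfigurable frame edges
set_property SNAPPING_MODE ON [get_pblocks pblock_rp1]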
The Dynamic Function eXchange Wizard is used to define relationships between the different parts of a DFX design. Using block design containers, you have created a level of the hierarchy of a design that can have more than one source in a DFX design. The block design container represents a RP and each source is a RM.
Within the DFX Wizard, you will define configurations and configuration runs. A configuration is a full design image, with one RM per RP. A configuration run is a pass of the place and route tools to create a routed checkpoint for that configuration. The DFX Wizard also establishes parent-child relationships between configuration runs, helping automate required parts of the flow including static design locking and pr_verify, and sets up dependencies between runs, so Vivado knows what steps must be rerun when sources are modified.
For more information on the DFX project flow, see in the Vivado Design Suite User Guide: Dynamic Function eXchange (UG909).
Now let's go through the steps to create the DFX configurations, or source 06_create_dfx_configs.tcl to automate them.
Open the DFX Wizard by clicking Dynamic Function eXchange Wizard in the Flow Navigator or by selecting that option under the Tools menu.
Click Next. In the Edit Reconfigurable Modules step, you will see the two RMs, rp1rm1_inst_0 and rp1rm2_inst_0, that you created within the rp1 block design container.
Click Next. In the Edit Configurations step, click the automatically create configurations link to generate two configurations. While you can also click the + button to generate these configurations, for designs with a single RP, automatic creation is the easiest way to create the list of configurations covering all RMs.
On the Edit Configuration Runs page, manually create the configuration runs to explore the options for the two flows: Standard and Abstract Shell. Abstract Shell has two fundamental advantages over standard full-static checkpoints:
Compile time for new Reconfigurable Modules is reduced for child runs, as Vivado implementation tools do not need to load or consider much of the information contained in the static part of the design.
Static design information, including licensed IP, is hidden from view in an Abstract Shell, enhancing design security and reducing IP license requirements. This benefit is currently only viable for AMD UltraScale+ non-project flow users.
Project mode for Abstract Shells leverages the first benefit but not the second. The entire design is always resident in a DFX project so there is no mechanism to hide any details about the static part of the design.
Note here that there are unsupported Features when using Abstract shells in DFX:
Abstract Shell does not support UltraScale or 7 series AMD devices
The current solution is set up for single-project environments. There is no support for exporting an Abstract Shell checkpoint to spawn a new “child project” for users to compile only Reconfigurable Modules. To share an Abstract Shell as a starting point for a secondary user, the flow must switch to a non-project Tcl scripted approach.
Selecting Standard DFX creates one run per configuration as declared on the prior page. The first configuration on the list is the parent run and the remaining ones are child runs of that parent. In this simple example, impl_1, the default run that is always created, is used as the Standard mode parent run.
Parent Run using Standard DFX
Run: impl_1
Parent: synth_1
DFX Mode: STANDARD
Configuration: config_1 (rm1)
First Standard Child Run
Run: impl_std_child_1
Parent: impl_1
Configuration: config_2 (rm2)
Selecting Abstract Shell creates one parent run, then one child run per Reconfigurable Module. In this design that also totals two runs, but for designs with multiple RPs, the number would be higher.
Parent Run using Abstract Shell
Run: impl_abs
Parent: synth_1
DFX Mode: ABSTRACT SHELL
Configuration: config_1
Uncheck the Auto Create Child Runs option
Abstract Child Run
Run: impl_abs_child_1
Parent: impl_abs
DFX Mode: ABSTRACT SHELL
RM Instance: design_1_i/rp1:rp1rm2_inst_0
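In Tcl, the configuration portion of this setup looks roughly like the sketch below. The Abstract Shell run options and the exact flow string for 2025.1 are assumptions, so refer to 06_create_dfx_configs.tcl for the commands actually used:
# One configuration per RM for the single RP
create_pr_configuration -name config_1 -partitions [list design_1_i/rp1:rp1rm1_inst_0]
create_pr_configuration -name config_2 -partitions [list design_1_i/rp1:rp1rm2_inst_0]
# impl_1 is the Standard DFX parent run; the child run implements config_2 on the locked static
set_property PR_CONFIGURATION config_1 [get_runs impl_1]
create_run impl_std_child_1 -parent_run impl_1 -pr_config config_2 -flow {Vivado Implementation 2025}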
With all design sources now added to the project, and all settings complete for a DFX design, it is time to implement the design.
This step is automated by the 07_run_impl_1.tcl script. Note that the script only implements the impl_1 and child_1_impl_1 configurations in order to keep implementation fast and move on to the non-project Tcl-scripted Abstract Shell flow in the next steps.
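A minimal sketch of what 07_run_impl_1.tcl does (run names as noted above; the -jobs value is arbitrary):
# Parent run first (synthesis is launched automatically as a dependency), then the child run
launch_runs impl_1 -to_step write_bitstream -jobs 4
wait_on_run impl_1
launch_runs child_1_impl_1 -to_step write_bitstream -jobs 4
wait_on_run child_1_impl_1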
In the Vivado UI Design Runs window, shift-click to select all the implementation runs. Right-click in this area and select Launch Runs. This action will kick off runs in the necessary order. Each parent run will run in parallel, as they do not depend on each other. Then after each parent run completes, each child below the parent runs will run in parallel.
Each child implementation happens within the context of a locked static design (standard or abstract) derived from the parent implementation.
When comparing the results, this is what you will see:
Parent run results for both impl_1 and impl_abs will have identical results (timing score, critical warnings, etc.). These runs use the same sources and options so the resulting placed and routed checkpoints will be identical.
Compile time for the Abstract Shell parent run (impl_abs) will be slightly longer overall than the Standard DFX parent run (impl_1). The divergence point for these flows is after the full routed design checkpoint is written.
The Standard DFX run carves out the RP (using update_design -black_box) then locks the remaining static (using lock_design -level routing) to create a static-only checkpoint, design_1_wrapper_routed_bb.dcp.
The Abstract Shell run embeds these two steps in write_abstract_shell, and additional carving and associated checks are done to remove the bulk of the static design. This process takes longer and results in the Abstract Shell file abs_shell_design_1_i_rp1.dcp.
Compare the file sizes of design_1_wrapper_routed_bb.dcp and abs_shell_design_1_i_rp1.dcp. The full shell will be more than 3x the size of the abstract shell for this design.
Compile time for the child runs will be longer when using the Standard DFX flow than for the Abstract Shell flow. This is where the Abstract Shell flow provides benefit. These runs have smaller checkpoints to open and less information to process so the overall compile times will be reduced. This design being a simple one, the compile time gains are quite modest.
The final Design Runs window will show numbers that look like this:
On this machine, the Abstract Shell child run took less time than the Standard DFX child run. For larger designs, especially those with multiple and/or relatively small RPs, the savings are even more dramatic.
At this stage, you can generate bitstreams by right-clicking the design runs and selecting Generate Bitstream.
The final step in the hardware flow is to export the platform for Vitis baremetal application or PetaLinux. This fixed Xilinx Support Archive (XSA) platform contains the full device bitstream, the hardware handoff, and other files needed to build an application with it.
Open the impl_1 design run by right-clicking on that run and selecting Open Run.
Select File > Export > Export Hardware.
Select the Include bitstream option and click Next.
Name the XSA file accordingly, then change the Export to directory to a folder beside the current repository directory. Click Next, then Finish to write the XSA.
Similarly, open the child_1_impl_1 design run by right-clicking on that run and selecting Open Run.
Repeat the above steps but change the XSA file name accordingly (e.g., full_static_rm2.xsa).
Repeat the above steps to generate XSAs for other design runs.
Note: In this release of Vivado, the XSA can only be exported with the full device bitstream; there is no option to include partial bitstreams.
The above steps for exporting the XSAs, as well as the partial bitstreams of the RMs, are automated in the 08_export_xsa_bit.tcl script.
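A sketch of the export commands used is shown below. The $BITSTREAMS_DIR variable comes from paths.tcl; the $XSA_DIR variable and the output file names are illustrative assumptions, so check 08_export_xsa_bit.tcl for the exact paths:
# Fixed platform XSA (with full device bitstream) for the parent configuration containing RM1
open_run impl_1
write_hw_platform -fixed -include_bit -force $XSA_DIR/full_static_rm1.xsa
# Partial bitstream for the RM currently loaded in RP1 in the open routed design
write_bitstream -force -cell design_1_i/rp1 $BITSTREAMS_DIR/rm1_partial.bit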
As mentioned in Step 7, to tap the full benefits of Abstract Shells, especially with respect to enhancing design security, we need to use the non-project mode with Tcl scripts. Moreover, we are interested in exploring how to hand over the Abstract Shell to another team that is solely developing new reconfigurable modules. The next steps focus on generating the Abstract Shell using a Tcl script; this is automated in 09_write_abstract_shell_rp1.tcl.
This script opens the Standard implemented design run (impl_1) and calls the write_abstract_shell Tcl command to generate the abstract shell checkpoint for RP1 and writes it to checkpoints/abs_rp1_routed.dcp. This Abstract Shell contains only a minimal logical and physical database necessary to implement a new RM within this specific RP1 to validate timing and pass PR Verify, and then generate a partial bitstream for that RM.
Each call to write_abstract_shell first creates a copy of the full design checkpoint in memory, then runs the following steps automatically:
Carves out the target RP1 (using update_design -black_box)
Locks the remaining design (including any other RMs)
Writes the Abstract Shell for the target RP
Runs pr_verify for this checkpoint compared to the original fully routed design
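The core of 09_write_abstract_shell_rp1.tcl is essentially the two commands below (sketched; the script adds the paths from paths.tcl and some housekeeping):
# Open the routed Standard DFX parent run and write the Abstract Shell for RP1
open_run impl_1
write_abstract_shell -force -cell design_1_i/rp1 $CHECKPOINTS_DIR/abs_rp1_routed.dcp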
Examine the files created for the parent configuration. In Windows Explorer or a shell console, navigate to the dfx_abstract_shell/projects/prj_dfx_axi_gpio_rgb_zu/prj_dfx_axi_gpio_rgb_zu.runs/impl_1 subdirectory and examine the different design checkpoints and their sizes. Note that the file sizes listed here may differ slightly depending on the Vivado version, implementation run options, and operating system. Key files include:
design_1_wrapper_routed.dcp (4,990KB): Full routed design with RM1 in RP1
design_1_wrapper_routed_bb.dcp (4,778KB): Static-only design with locked placement and routing and a black box for RP1
design_1_i_rp1_rp1rm1_inst_0_routed.dcp (358 KB): Routed module-level checkpoint for the RM1 instance
It is no surprise that the RM1 checkpoint is much smaller than the static design checkpoints, given the relative size and complexity of the static design in this example.
Examine the sizes of the Abstract Shells, comparing them to the size of the design_1_wrapper_routed_bb.dcp full shell checkpoint. Sizes may vary, but for Vivado 2025.1 on Windows, the Abstract Shell for RP1 (abs_rp1_routed.dcp) is 1,094 KB.
Note how much of the static design is no longer present in the Abstract Shell. Even in this simple design, you can see that parts of the static design have been removed; however, parts do remain, including elements from the DFX Controller and DFX Decoupler logic, as they have connectivity to RP1.
Full routed design with RM1 in RP1
Routed static only design with black box for RP1
Routed RP1 Abstract Shell checkpoint
If the development of a new RM is to be outsourced to another team, the main reference files to hand over for RM development are:
Reference RM block diagram: dfx_abstract_shell/projects/prj_dfx_axi_gpio_rgb_zu/prj_dfx_axi_gpio_rgb_zu.srcs/sources_1/bd/rp1rm1/rp1rm1.bd
RP interface: dfx_abstract_shell/projects/prj_dfx_axi_gpio_rgb_zu/prj_dfx_axi_gpio_rgb_zu.gen/sources_1/bd/design_1/bd/rp1rm1_inst_0/rp1rm1_inst_0_stub.vhdl
This is automated by the 10_export_rm_handoff.tcl script and the handoff files are placed under dfx_abstract_shell/artifacts/handoff_rm.
At this point, new RMs can be implemented within the written Abstract Shell. Each RM can be implemented in parallel in separate Vivado sessions if desired, as each RP can be managed independently. This being a simple project, there is only one Abstract Shell (for RP1); in a larger project with multiple RPs, this can be done for all RPs in the design. Unlike the project flow, where the focus is on full design configurations, the focus for the Abstract Shell approach is on the RMs. Note: The port list for each RM for a given RP must be identical, even if not all of the ports are used by each RM.
Navigate to dfx_abstract_shell/tcl/vivado/reconfig_modules where the scripts for this section are present.
We follow the same steps used for the previous RMs to create the RM3 block design, but in a standalone block design project. You can refer to the reference block diagram and the RP1 stub interface exported in Step 10 of the previous section to create the block diagram. This step is automated using 01_create_rm3_bd_prj.tcl, or follow the steps below in the Vivado UI to create RM3.
Open a new Vivado session and create a new block design project and name the design rp1rm3.
Add the AXI GPIO and Constant IPs and configure them as shown below. Make sure to configure values different from RM1 and RM2 so that the RMs are clearly distinguishable when partially reconfiguring them at runtime on the board. In RM3, the GPIO output to the RGB LED is set to 0x4 (blue) and the constant value is 0xDEADBEEF.
RM3 AXI GPIO Configuration
RM3 Constant Configuration
RM3 address configuration
Right-click on each of the ports (S_AXI, s_axi_aclk, s_axi_aresetn, and gpio2_io_o) and click Create Port. Make sure to name them exactly as below to match the RP stub interface; these ports will be exposed in the top-level wrapper and must match the ports in the parent RP interface exactly.
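A sketch of the equivalent Tcl for creating these boundary ports is shown below. Port directions and widths must match the exported rp1rm1_inst_0_stub.vhdl exactly; the axi_gpio_0 instance name and the 100 MHz clock frequency are assumptions (the frequency should match the PS clock driving the RP):
# AXI4-Lite slave interface exposed at the RM boundary
create_bd_intf_port -mode Slave -vlnv xilinx.com:interface:aximm_rtl:1.0 S_AXI
# Clock and reset associated with S_AXI
create_bd_port -dir I -type clk -freq_hz 100000000 s_axi_aclk
create_bd_port -dir I -type rst s_axi_aresetn
# 3-bit GPIO output that drives the RGB LED in the static region
create_bd_port -dir O -from 2 -to 0 gpio2_io_o
# Wire the boundary ports to the AXI GPIO instance
connect_bd_intf_net [get_bd_intf_ports S_AXI] [get_bd_intf_pins axi_gpio_0/S_AXI]
connect_bd_net [get_bd_ports s_axi_aclk] [get_bd_pins axi_gpio_0/s_axi_aclk]
connect_bd_net [get_bd_ports s_axi_aresetn] [get_bd_pins axi_gpio_0/s_axi_aresetn]
connect_bd_net [get_bd_pins axi_gpio_0/gpio2_io_o] [get_bd_ports gpio2_io_o]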
Finally, in the Sources window, right-click on rp1rm3.bd and select Create HDL Wrapper. Keep the Let Vivado Manage option selected and click OK. In the Sources window, rp1rm3_wrapper.vhd has been created and added to the project. This HDL file instantiates the rp1rm3 block design as shown below.
This step is done using the 02_run_synth_rm3.tcl script. In the Tcl console, run source 02_run_synth_rm3.tcl to synthesize the design and write the post-synthesis netlist checkpoint for the RM3 module.
synth_design -mode out_of_context -top rp1rm3_wrapper -part $PART_NO
write_checkpoint -force $CHECKPOINTS_DIR/rp1rm3.dcp
This step is done using the 03_generate_rm_partial_bit.tcl script. This script does the following steps:
Starts a new Vivado session to work independently of the previous project.
Loads the RP1 Abstract Shell checkpoint, which was generated after place-and-route implementation of the parent design.
add_files $abs_dcp
Loads the RM3 post-synthesis netlist checkpoint generated in the previous step.
add_files $rm_dcp
Links the routed RP1 Abstract shell checkpoint with the synthesized RM3 checkpoint.
set_property SCOPED_TO_CELLS {design_1_i/rp1} [get_files rp1rm3.dcp]
link_design -mode default -reconfig_partitions {design_1_i/rp1} -part $PART_NO -top design_1_wrapper
Implement the design normally, then when it is time to save the routed design, save the complete current image (Abstract Shell plus Reconfigurable Module).
opt_design
place_design
route_design
write_checkpoint -force $CHECKPOINTS_DIR/abs_rp1_rm3_routed.dcp
Note: Before considering partial bitstream generation, PR Verify should always be run. PR Verify compares multiple design images in which the RMs differ but the static design is the same, to ensure all DFX rules have been followed. If full configuration assembly is done, PR Verify can be run in the standard way, comparing the entire static design for each checkpoint. However, PR Verify can also be run in the Abstract Shell context, comparing the initial Abstract Shell to the shell with the routed Reconfigurable Module. Using the checkpoints created above, this verification check can be done if no checkpoints are currently open:
pr_verify $CHECKPOINTS_DIR/abs_rp1_routed.dcp $CHECKPOINTS_DIR/abs_rp1_rm3_routed.dcp
DCP1: C:/workspace/dfx_abstract_shell/artifacts/checkpoints/abs_rp1_routed.dcp
Number of reconfigurable modules compared = 1
Number of partition pins compared = 105
Number of static tiles compared = 11583
Number of static sites compared = 106
Number of static cells compared = 596
Number of static routed nodes compared = 17423
Number of static routed pips compared = 16646
DCP2: C:/workspace/dfx_abstract_shell/artifacts/checkpoints/abs_rp1_rm3.dcp
Number of reconfigurable modules compared = 1
Number of partition pins compared = 105
Number of static tiles compared = 11583
Number of static sites compared = 106
Number of static cells compared = 596
Number of static routed nodes compared = 17423
Number of static routed pips compared = 16646
INFO: [Constraints 18-13353] HDPRVerify-42: All DCPS provided to pr_verify have locked static.
INFO: [Vivado 12-3253] PR_VERIFY: check points C:/workspace/dfx_abstract_shell/artifacts/checkpoints/abs_rp1.dcp and C:/workspace/dfx_abstract_shell/artifacts/checkpoints/abs_rp1_rm3.dcp are compatible
pr_verify: Time (s): cpu = 00:00:16 ; elapsed = 00:00:14 . Memory (MB): peak = 4237.246 ; gain = 56.473
Each Abstract Shell contains all the information needed not only to implement any RM for that RP, but to create the partial bitstream for that function. It is important to note that partial bitstream generation for RMs should be done from the design checkpoint that includes both the RM and the Abstract Shell in which it was implemented. The Abstract Shell contains critical information about the static design that must be included in a partial bitstream.
open_checkpoint "$CHECKPOINTS_DIR/abs_rp1_rm3_routed.dcp"
write_bitstream -force -cell {design_1_i/rp1} $BITSTREAMS_DIR/rm3.bit
In this section, we will see how to consume the hardware generated in the previous Vivado steps to build a bare-metal application in Vitis. The goal is to run the application on the ZuBoard and perform runtime partial reconfiguration using the partial bitstreams generated earlier, one with the Standard DFX mode (rm2.bit) and one with the Abstract Shell DFX mode (rm3.bit), and observe the outputs/LED on the board.
Open a Vitis 2025.1 workspace and create a new platform by clicking File > New Component > Platform, then follow the steps below:
Choose the libraries in the board support package for the platform as shown below to enable reading from SD card and configuring the PL over PCAP from PS.
Click Build in the Flow tab to start the platform build.
Once the build is complete, you should see this message in the output console: Platform Build Finished successfully.
Click the Examples button (shown in red) on the left pane and choose the Hello World program.
Click Create Application Component from Template
Choose the platform we created before as shown below.
Keep the default standalone_psu_cortexa53_0 domain inside the platform and click Next, then Finish.
Now the Hello World application will be created and it should be available in the Vitis Explorer as below.
Now replace the contents of the helloworld.c with the contents from the provided source in the repository at dfx_abstract_shell/src/vitis/helloworld.c.
The updated program does the following steps:
Reads the Static and RM1 GPIO registers
Mounts the SD card to read the partial bitstreams
Partially reconfigures the PL with the RM2 bitstream generated by the Standard DFX flow and reads the RM2 GPIO registers to confirm reconfiguration
Finally, partially reconfigures the PL with the RM3 bitstream generated by the Abstract Shell DFX flow and reads the RM3 GPIO registers to confirm reconfiguration
Increase the heap size in the linker script from 0x2000 to 0x200000 to accommodate the partial bitstreams during dynamic memory allocation while running the application.
Now we need to copy the partial bitstreams to the SD card so that when the application runs on the Zynq MPSoC, it can read the bitstreams and partially reconfigure the PL when requested.
For the ZuBoard to recognize the drive, ensure your SD card is formatted as FAT32.
The partial RM2 and RM3 bitstreams should be available at dfx_abstract_shell/artifacts/bitstreams
Set the boot switches on the ZuBoard as below to boot from JTAG. All the SW2 [1-4] switches should be ON.
Connect the USB-C power and USB UART cables to your computer and power on the board.
Finally, you can run the hello_world application by clicking Run in Vitis and monitor the output in the serial monitor as shown below. Note that as the RMs (1, 2, 3) are partially reconfigured, RGB LED1 on the ZuBoard changes from red to green and then to blue, showing DFX in action.
In this tutorial, we walked through the complete DFX development flow, from Vivado DFX hardware generation to Vitis application development, covering both the Standard DFX and Abstract Shell flows. This should help you develop more complex DFX designs and outsource reconfigurable module development to other teams using Abstract Shells, enabling faster implementation times as well as IP protection and modular design. That's it, folks!