LLM-Drone: Aerial Additive Manufacturing with Drones Planned Using Large Language Models
Chad Merrill*¹ Akshay Raman*¹ Abraham George¹ Amir Barati Farimani¹
*Equal Contribution
¹Carnegie Mellon University Mechanical Engineering
In this work, we introduce LLM-Drone, an additive manufacturing system that utilizes an aerial drone equipped with an LLM agent to construct objects using modular building blocks.
Additive manufacturing (AM) has transformed the production landscape by enabling the precise creation of complex geometries. However, AM faces limitations when applied to challenging environments, such as elevated surfaces and remote locations. Aerial additive manufacturing, facilitated by drones, presents a solution to these challenges by allowing construction in previously inaccessible areas. However, despite advances in methods for the planning, control, and localization of drones, the accuracy of these methods is insufficient to run traditional feedforward extrusion-based additive manufacturing processes (such as Fused Deposition Modeling). Recently, the emergence of LLMs has revolutionized various fields by introducing advanced semantic reasoning and real-time planning capabilities. This paper proposes the integration of LLMs with aerial additive manufacturing to assist with the planning and execution of construction tasks, granting greater flexibility and enabling a feedback-based design and construction system. Using the semantic understanding and adaptability of LLMs, we can overcome the limitations of drone-based systems by dynamically generating and adapting building plans on site, ensuring efficient and accurate construction even in constrained environments. We propose a novel methodology that leverages LLMs to design a construction plan for drone-based manufacturing and adjust the plan in real time to address any errors that may occur. Our system is able to design and build structures given only a semantic prompt and has shown success in understanding the spatial environment despite tight planning constraints. Our method's feedback system enables replanning using the LLM if the manufacturing process encounters unforeseen errors, without requiring complicated heuristics or evaluation functions.
Combining semantic planning with automatic error correction, our system achieved a 90% build accuracy, converting simple text prompts into built structures.
(a) The main modules required for the additive manufacturing process. (b) A prompt is created from a default prompt template that includes current scene info and a design request. The LLM processes the prompt and outputs the coordinates needed to achieve the design. (c) The vision module aligns the Crazyflie coordinates with the LLM output coordinates using the Crazyflie lighthouse. (d) The drone places a block and the vision model verifies the placement. If incorrect, the current scene is passed back to (b) for reprompting, and a new set of coordinates is generated to finish the design. (e) The Crazyflie drone transporting a building block from pickup to drop off.
To explore aerial additive manufacturing, we use the Bitcraze Crazyflie 2.1 nanoquadcopter, a small and inexpensive quadcopter designed for research and prototyping. To control the drone, we use Crazyflie’s Motion Commander module in the provided Python API, which allows for precise control of the drone’s position and movement. Communication with the drone is handled via the CrazyRadio module, while the integrated Lighthouse positioning system ensures accurate tracking in three-dimensional space.
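To send the drone to a block location, the integer grid cells in the plan must be mapped into Lighthouse world coordinates. A minimal sketch of such a mapping is shown below; the cell size and grid origin are hypothetical calibration values, not figures from this work.

```python
# Sketch: mapping integer grid cells from the plan to Lighthouse world
# coordinates for the Motion Commander. CELL_SIZE and GRID_ORIGIN are
# assumed calibration constants for illustration only.
CELL_SIZE = 0.06              # meters per grid cell (assumed)
GRID_ORIGIN = (-0.15, -0.15)  # world (x, y) of grid cell (0, 0) (assumed)

def cell_to_world(col: int, row: int) -> tuple:
    """Convert a (col, row) grid cell to world (x, y) in meters."""
    x = GRID_ORIGIN[0] + col * CELL_SIZE
    y = GRID_ORIGIN[1] + row * CELL_SIZE
    return (x, y)
```

In practice these constants would come from aligning the vision module's grid with the Lighthouse base-station frame during setup.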
The Crazyflie drone assembles structures by placing small, interlocking building blocks. These blocks feature a magnetized interlocking mechanism that facilitates easy alignment and secure attachment. The blocks are designed with a hump at the top, serving as an auto-localizing feature to guide the alignment when stacking blocks. To pick up the blocks, the drone uses a metal wire, suspended below the drone, to attach to the top magnet of the block. Once the block is locked, the drone can carry and position it at the desired location. A stronger magnet, integrated into the drop-off system, attracts the block more forcefully, allowing the drone to detach from the placed block. A diagram of the block and connection design, along with examples of the drone placing the blocks, is shown in the figure below:
(a) Model of Crazyflie pickup apparatus. A rigid tube keeps the z-length of ferrous wire constant over numerous pickup/dropoff attempts. The wire has the ability to move toward magnetic attraction from the ‘pickup magnet’ on the block.
(b) Timelapse of the pickup and dropoff procedure. The stronger dropoff magnet allows the drone to detach from the building block once it is placed. Positioning the drone precisely without excessive movement is challenging; the magnetic interconnection ensures that the blocks can connect to each other with ease, even if the positioning of the drone is not perfectly precise.
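The pick-and-place cycle described above can be sketched as an ordered waypoint sequence: descend over the pickup magnet, lift the block, transit to the target cell, and lower until the stronger dropoff magnet pulls the block off the wire. The heights and the waypoint format here are illustrative assumptions, not the paper's actual flight parameters.

```python
# Hypothetical sketch of one block-placement flight, as implied by the text.
# Heights (in meters) are assumed values for illustration.
def place_block_waypoints(pickup_xy, target_xy,
                          cruise_z=0.5, pickup_z=0.12, dropoff_z=0.15):
    """Return an ordered list of (x, y, z) waypoints for one block placement."""
    px, py = pickup_xy
    tx, ty = target_xy
    return [
        (px, py, cruise_z),   # hover over the pickup station
        (px, py, pickup_z),   # descend so the wire latches onto the pickup magnet
        (px, py, cruise_z),   # lift the block clear of the station
        (tx, ty, cruise_z),   # transit to the target cell
        (tx, ty, dropoff_z),  # lower until the stronger dropoff magnet grabs the block
        (tx, ty, cruise_z),   # ascend; the weaker wire magnet releases the block
    ]
```

Each waypoint would be fed to the Motion Commander in sequence; the magnetic design tolerates the residual positioning error at each step.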
Much like a slicer in traditional 3D printing, our system uses an LLM to translate high-level design goals into structured, executable plans for our drone-based manufacturing system. With its advanced reasoning and creative capabilities, the LLM can dynamically generate and adapt designs based on the current build state and user-defined requests. By combining predefined geometries with LLM-based planning, our proposed approach not only enhances construction accuracy but also enables a flexible and scalable framework for drone-assisted manufacturing across diverse environments.
To generate the construction plan, we provide the LLM with a description of the goal design (provided by the user), an occupancy grid of the scene to show where blocks have already been placed, and a set of rules specifying the LLM's task and defining the desired output schema. The LLM responds with a JSON-formatted plan, which lists the coordinates of the next blocks to be placed. In order to adapt to errors in the building process, we can re-query the LLM after each block placement to update the plan.
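A minimal sketch of consuming the LLM's JSON plan is shown below. The exact schema (a "blocks" key holding [x, y] pairs) is an assumption for illustration; the paper only states that the plan lists the coordinates of the next blocks to place.

```python
import json

# Sketch of parsing and sanity-checking the LLM's JSON plan.
# The "blocks" key and [x, y] pair format are assumed for illustration.
def parse_plan(response_text: str, occupancy, grid_size=5):
    """Return validated (x, y) placements from the LLM's JSON response."""
    plan = json.loads(response_text)
    placements = []
    for x, y in plan["blocks"]:
        if not (0 <= x < grid_size and 0 <= y < grid_size):
            raise ValueError(f"coordinate ({x}, {y}) outside the grid")
        if occupancy[y][x]:
            raise ValueError(f"cell ({x}, {y}) is already occupied")
        placements.append((x, y))
    return placements
```

A validation failure of this kind is one natural trigger for re-querying the LLM with the current scene.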
The prompt is broken into five parts: Design Request, JSON Schema, Rules, Current Scene, and Task. The Task, Rules, and JSON Schema are predefined and do not change. The Design Request is input by the user at the start of the build process. The Current Scene is captured and included each time the LLM is prompted.
To compare the robustness and overall feasibility of the build environment, as well as the LLM's capabilities, our system was evaluated with two sets of experiments: the first examined the LLM's creativity and robustness to errors during manufacturing, and the second examined the system's ability to convert commands from the LLM into correct actions and to provide valid feedback to remedy any errors. In essence, we design a virtual test scenario that validates the LLM's performance and a physical scenario that validates hardware cooperation and the ability to execute LLM actions. For both of these experiments, the system manufactures the designs additively, placing blocks one at a time to iteratively build a structure (either virtually or physically). Additionally, there is no mechanism for the system to remove erroneously placed blocks: if such blocks are placed, they must be incorporated into the design.
The first test was a quantitative assessment of LLM designs on a 10x10 grid. For this test, we created fifteen “constrained prompts" - prompts in which only one answer can be considered correct. The purpose of these constrained prompts was to restrict the output of the LLMs for precise accuracy measurement. We evaluated their performance using an Intersection over Union (IoU) metric, comparing the generated designs with manually defined correct answers. Each LLM was tested five times per prompt, and an average IoU was calculated across these responses.
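The IoU metric used in this test can be computed directly on binary occupancy grids, as in the short sketch below (the grid representation is an assumption; the metric itself is standard).

```python
# Intersection over Union between two same-sized binary occupancy grids,
# as used to score constrained-prompt designs against the reference answer.
def iou(pred, target):
    inter = sum(p and t for prow, trow in zip(pred, target)
                for p, t in zip(prow, trow))
    union = sum(p or t for prow, trow in zip(pred, target)
                for p, t in zip(prow, trow))
    return inter / union if union else 1.0
```

An IoU of 1.0 means the generated design exactly matches the single correct answer; any extra or missing block lowers the score.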
The second test was a qualitative comparison conducted on a smaller 5x5 grid. This test focused on open-ended design requests for simple geometric shapes, such as “star", “trapezoid", and “right triangle". We assessed the feasibility and recognizability of the generated designs for each request. Feasibility was determined by whether the LLM followed the given guidelines, including staying within the 5×5 grid, using only integer values for coordinates, and responding in the correct JSON format, all of which were specified in the prompt. Recognizability was judged based on whether a person could correctly identify the intended shape without prior knowledge of the design request. Human evaluators graded each design based on these criteria, using a three-point scale: 1 indicated the design was both feasible and recognizable, 2 indicated it met only one of these criteria, and 3 indicated it met neither.
Performance of LLMs in the quantitative test (a and b) and qualitative test (c).
We tested the LLM-Drone pipeline with the Crazyflie ecosystem on a 5x5 grid build space. We executed the pipeline for several design requests, including a smiley face, cross, diamond, square, the letter L, and “two columns on the left and bottom right corner only." In order to evaluate the effectiveness of reprompting the LLM on placement errors by the drone, we execute runs for each of these designs both with and without reprompting enabled.
a) Outlines a step-by-step design process with reprompting enabled for the design of a Smiley Face.
b) Provides an overview of 6 designs, both with and without reprompting.
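The reprompting loop evaluated here can be sketched as follows. `observe_scene`, `query_llm`, and `execute_placement` are placeholders standing in for the vision module, the LLM planner, and the drone pipeline; the single-block-per-query flow is an illustrative simplification.

```python
# Sketch of the feedback loop: after each placement, the observed occupancy
# grid is fed back to the LLM, which replans around any misplaced blocks
# (they cannot be removed, so the plan must incorporate them).
def build(design_request, observe_scene, query_llm, execute_placement,
          max_blocks=25):
    for _ in range(max_blocks):
        scene = observe_scene()                  # current occupancy grid
        plan = query_llm(design_request, scene)  # next coordinates, or []
        if not plan:
            return scene                         # design complete
        x, y = plan[0]
        execute_placement(x, y)
        # Any placement error simply appears in the next observed scene,
        # so the LLM replans rather than removing the block.
    return observe_scene()
```

With reprompting disabled, the loop would instead execute the initial plan open-loop, which is the baseline compared in the figure above.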
The integration of Large Language Models (LLMs) with aerial additive manufacturing represents a transformative step forward in the field of construction and logistics, offering a dynamic solution to the challenges posed by inaccessible terrains and complex construction tasks. Through our research, we demonstrate the capabilities of LLM-driven drones to achieve high levels of precision and adaptability in real-world settings. Our findings indicate that the LLM planner achieves up to 90% build accuracy, enhancing both the planning and execution phases of aerial manufacturing tasks.
Overall, our results demonstrate that by leveraging the combination of block-based drone additive manufacturing with Large Language Models, our proposed method enables the design and manufacturing of complex objects, specified via textual description, in a manner that is robust to error. Although our experiments suggest that this methodology is a promising solution to many of the challenges facing in-the-field additive manufacturing, a major limitation of this work is that our experiments were conducted in a laboratory environment. Future work will have to extend the general proof of concept we present in this work to address application-specific challenges that will arise when deploying such a system in the field. Additionally, future work can explore using LLM reasoning to construct multilayer structures, allowing complex 3D objects to be built. Finally, on a mechanical level, future work can investigate enhancements such as larger, more capable drones and the incorporation of magnets that can be turned on and off. These changes would further optimize the performance and flexibility of the design, allowing for more precise control and a wider range of applications.
@article{raman2025llm,
title={LLM-Drone: Aerial Additive Manufacturing with Drones Planned Using Large Language Models},
author={Raman, Akshay and Merrill, Chad and George, Abraham and Farimani, Amir Barati},
journal={arXiv preprint arXiv:2503.17566},
year={2025}
}