Students had one week to redesign a toy they had completed during a previous course. Starting from a sketch, photo of a prototype, a project description, or a combination of these, students then expressed their redesign intent before initiating a "collaboration" with one or more "AI" platforms, building on knowledge gained during their Mockt[AI]ls assignment.
Projects presented here include a description of the original design concept, a statement of intent for the redesign, a description of the redesign process, and a reflection on the design experience. Images are captioned with the prompts used.
Tools used for this assignment:
Midjourney via https://www.midjourney.com/
Vizcom via https://www.vizcom.ai/
During student presentations of their work, emergent discussion themes included:
the personification of AI; "it doesn't understand..."
collaboration with an AI; roles and responsibilities of a collaborator
the importance of understanding communication preferences and barriers, including modality (text, natural language, visuals, body language)
the importance of "intent" in a design or redesign effort
Designed by: Nicole Li
Original Concept Description: A box with a pulling track that creates sensory input for children, specifically hearing and seeing.
Intent of Redesign: For my redesign, I wanted to explore what else can be created with a simple box with a pull track toy. I was curious to see what other forms or routes I could have taken with my toy.
Process: I first started off with a simple prompt such as “a sensory pull toy for children” and worked my way into something more specific like animals and ships. Pretty early on, I noticed how the results were pretty much all the same and there wasn’t much change to my original sketched design. I changed the setting from “render” to “refine” and the “drawing influence” from 50% to 30%. I then typed in prompts that I thought would be strange yet interesting to see combined with my design.
Reflection: It was interesting to see what AI could do with such a simple design that I fed it. Sometimes the image generated was too literal with my prompt, but other times it followed too closely to the form of my original design. I found the ones that blended both, using my design as a base for a scenario from the prompt, the most intriguing, but finding that perfect blend was a bit tough. Even when I did find that one image, there were still some things that I didn't quite agree with. I think using AI as a tool to generate ideas is useful, but not as a tool to create a final design.
Designed by: Mica Bendezú
Original Concept Description: Color sorting, pull-apart dandelion with magnetic seeds. Designed with the play tendencies of autistic children in mind.
Intent of Redesign: To re-imagine my toy at a larger scale, as an installation/structure for play.
Process: I started out by editing and adjusting one of my final sketches of my concept. I was having difficulty generating something more 'normal,' or in line with something familiar like a playground, and I couldn't think of what latent text to provide the programs with. So, I tried the "/describe" command in Midjourney to see how the program was interpreting my sketch. Each of the descriptions it provided was fairly accurate to my own interpretation, but didn't pick up on anything more than the visual elements (which makes sense). I then had the A.I. generate images based on the descriptions as prompts. The results were intriguing, and I decided to run with them.
Reflection: I realized it would take an inordinate amount of time and effort to figure out how to get the A.I. programs to replicate what my concept was. My concept, I think, is too different from any of the data these programs were trained on, and the elements (physical and conceptual) that make it work are hidden from view. With this in mind, I moved forward trying to efficiently use the programs without fighting with them to get them to work in the way I might want them to.
Designed by: Cynthia Szeto
Original Concept Description: A fun game where children can work on their hand-eye coordination. Once a child pulls the tail, the tongue shoots out, and they can use the Velcro tip to latch onto/catch different kinds of bugs.
Intent of Redesign: In this redesign I wanted to play with scale as well as the animal it's based on. My original design ended up looking less like a chameleon and more like a dinosaur, so I wanted to play into that more and see what it would look like if it went full dinosaur.
Process: I started out, much like the last project, with Midjourney. I used that for a while before switching and testing out Vizcom to see if I could get a better result. Eventually, I went back to Midjourney to make something completely new which was then moved to photoshop to change small details and expand the image.
Reflection: I originally wanted the renders to still look like the sketches and then modify them as I went. It became clear fairly quickly that it was really hard to get AI to make them exactly like the sketch. This is where I switched to Vizcom to see if I could get the renders closer to the sketches. Here, the renders stayed true to the sketches, but now I couldn't really get them to change as much as I liked. I found I really had to word things differently and mess with different options to get change. In the end I went back to Midjourney and went all in on changing the toy almost entirely. When I managed to let go of the sketches, I was able to play with the wording more and get better results. From there, the details added in Photoshop were how I brought in more of the original design and finished the redesign.
Designed by: Katrina Boyce
Original Concept Description: The original toy is a log-inspired roller pull toy with holes that have fun bugs inside. The body of the log unfurls to reveal a play mat with illustrations of the bugs.
Intent of Redesign: I intend to see how AI can further my design by enhancing my final sketch. I'm curious how AI will render my sketch and/or how it could expand on my design.
Process: I used Vizcom to create variations of my final toy sketch. I played with variations on the wording, although I kept it pretty similar for all generations. I also experimented with Vizcom's style and drawing influence features. Once I got an output I liked, I used Photoshop to remove all of the pieces that didn't make sense and fix imperfections.
Reflection: I enjoyed this process and found the results interesting. I noticed through the process that the generations were either very similar to my sketch (drawing influence set to 50% or above) or looked like nothing from my sketch was used and the result didn't fully make sense (30% and below). Also, I didn't notice a big difference when I changed the style of the render. The range and variation seemed somewhat limited but this may also be because I am still learning the best prompts and settings with this particular AI. I feel this could be a helpful tool to enhance sketches and concepts, although I would likely still need to edit what AI produces so it makes total sense and matches my vision.
Designed by: Aliina Lange
Original Concept Description: Glow-in-the-dark bath toy that sticks to the wall. Allows the child to squish and pull on the toy while it glows and floats around in the bath or stick to the side of the tub. A somewhat stationary toy that allows the child to interpret how to use it for themselves. I wanted the user to avoid technology and noise, and just enjoy bath time.
Intent of Redesign: Went to the complete other side of the spectrum. Made from hard plastic and remote controlled, the octopus would move around on the floor and have moving joints. Rather than encourage a connection between toy and child, I focused on what a remote control could do for the child. It is more of a cause-effect toy than something with sentimental, sensory value.
Process: I knew I didn't want to stray too far from my original design, as I felt proud of the simple form. I wanted to just flip its materiality over to the opposite axis. Using Vizcom, I switched from "render" to "refine" and continuously adjusted how much influence my drawing had on the outcome. I couldn't get rid of my contour lines, so I ended up embracing them, and making it a hard plastic form inspired by cyborg armor.
Reflection: With this assignment, I was glad to see something that looked like I had created it. Since I made a product that had the same form, I think the A.I. render registers as having gravity in the real world, because I've physically held an object that looks just like it. Revisiting an old design was also fun for me. When I was mind-mapping for the toy project, I went down the path of "The Information Age" and what that meant to me. I had originally chosen to do a toy that voided technology of its value: it was meant to be submerged in water. Now, I had the chance to go down the other path and quickly render what a "high-tech" version could have looked like.
Designed by: Sophie Chu
Original Concept Description: The Peekaboo Drawer is a fun, playful piece of furniture that grows with your child. The asymmetrical shape gives the drawers a personality while the handles are abstracted faces that are friendly and inviting. When the children are just starting to walk, the drawers on the bottom are meant to aid them in standing up and balancing. The small peekaboo toys inside the drawers are a fun surprise that recreates the familiar experience of playing peekaboo with their parents. As the children get older, the drawers also age with them. The taller the drawer, the more complex the opening system. The handles have a variety of shapes and sizes that help improve toddler dexterity, motor control, and problem-solving skills. The toys inside can be modified for matching games, educational learning, or just emotional attachment through a universal snap button. The playful nature of the drawer is meant to evoke a sense of wonder, curiosity, and magic while also developing children’s physical functions.
Intent of Redesign: I wanted to see if Midjourney could recolor something drawn while still keeping the same form. I also wanted to see if it could make the dressers more magical and whimsical.
Process: I used my final concept poster as the original. I asked AI to make the handles more complicated and more enticing. Midjourney was very confused by my sketch of the drawer. I had to fight it to make it want to create something that didn't resemble a stereotypical dresser. I found 2 images that were the closest to what I was looking for so I blended them to create the final. I then used Photoshop generative fill to add a window and some toys on the ground. The sunlight was painted using Procreate.
Reflection: Midjourney had a very hard time letting go of what a stereotypical dresser looked like. It did not want to keep the organic shapes, weird handles, or differently-sized drawers. The children also came out looking a little uncanny-valley. Many of them had bad limb placements, horrible proportions, and no realism. AI has trouble letting go of what it previously knows and even when I told it not to change anything about the drawer, it still did.
Designed by: Cheyann Clingerman
Original Concept Description: My final product is a water and oil stim toy with a target audience of neurodivergent, specifically autistic, children. The shape is an abstraction of a submarine, designed to be fun to hold and interact with. The PLA printed shell offers stability and durability to the product and its texture was given extra care to be inviting to those who are sensitive to textures. The colors mimic the depth of water with the outer shell being a bright teal and the water itself colored a deep blue like the ocean. The stripe of gold calls to sand. The affordances of the windows gives space to interact with the water and oil in very satisfying ways. It also creates a distinct glub glub sound. The shape allows for many play scenarios. Whether this toy is a larger fixture to be observed in a child's room, or a smaller fidget toy to be taken on the go, it will provide comfort and a unique sensory experience.
Intent of Redesign: I wish I had more time to experiment with scale in the original project. So now, I set out to up-scale my concept to the size of a museum exhibit that multiple children could experience at once.
Process: First I was curious if DALL-E could make anything similar to my idea with only text prompts. AI really isn't a good tool for out-of-the-box ideas. It struggled to get anything close to my form, materiality, or the water and oil inside. I moved to Vizcom and used a shot of my 3D model that I wanted the program to scale and place in a museum setting. I ran into more trouble here than I was expecting. It didn't know what to make of my model, and I was lacking the language to communicate with the AI too. This wasn't like a shoe, where I could tell it to "make the sole of the shoe bigger" and it could understand the structure of the shoe and where the sole would be. Eventually I got something close enough to my idea and did heavy editing over it.
Reflection: The more I painted over the AI-generated image, the more problems I started to see. It was a maddening process. I feel like the final raises more questions than it answers. I wouldn't use AI as an underlay for drawing anything that's meant to communicate a purpose. One positive was that I enjoyed having a clear color palette to pull from. I also wouldn't have made the structure hollow, where children could climb inside, which is an interesting concept the AI brought to me.
Designed by: Lea Sokol
Original Concept Description: For this project, I planned to design a pull toy for children from ages 2 to 4. After researching kids’ toys, I knew I wanted my toy to contribute to the learning and development of children through the use of bright colors, shapes, different textures, and cause and effect. I planned to create a pull toy that incorporates each of these things.
I intended to design a toy where when a carrot is pulled from the ground, a hidden bunny is revealed. I also planned to incorporate additional parts and textures to make my toy more dynamic and appealing to children. I added more removable root vegetables to the ground structure that can be taken out and put back into pockets of their respective colors and shapes. This adds a puzzle/game component to my toy that would spark curiosity and encourage logical thinking in addition to teaching them cause and effect.
Intent of Redesign: I want to reimagine my design using different materials and forms. Since my toy has a mostly felt finish and my form is mostly rounded shapes, I want to see what my toy would look like with a plastic or wooden finish with differently shaped parts. I also wanted to see how AI would interpret my drawing and the design of children's toys.
Process: To begin, I found a clear drawing of my toy design and inserted it into Midjourney with the caption "a Fisher-Price toy." I wanted to see what Midjourney would generate, and while it did make toys, the designs were far from mine. I then wrote more in-depth descriptions, and Midjourney generated images closer to what I sought. I tried a variety of different captions and got a variety of toys generated. I also played around with Vizcom, but overall liked the images generated by Midjourney better. I found that Vizcom had trouble producing toy-like renderings, as it repeatedly produced "toys" with realistic-looking vegetables and realistic bunnies. After finding a rendering I liked, I took it into Procreate to edit to my liking.
Reflection: It was hard for the AI to understand and interpret my mechanism. None of the images came out very close to my design because of the materials and the difficulty of explaining the mechanism. I often put the phrase "Fisher-Price toy" into the prompt, and I think that caused plastic or wooden toys to be produced.
Designed by: Kora Lilly
Original Concept Description: Patch the Pig is an interactive toy that grows with your child physically and mentally. My main goal was to create an interactive piece that grows with the child as well as rethinks the classic felt board. Covered in a cream felt, this design allows the child to personalize the pattern, color, and shape location of the other felt pieces. The additional felt pieces can be changed based on the learning curve of the child. From ages 1-2 the felt pieces can be shapes, and from 3-5 letters and numbers can be added. On top of the felt learning experience, the child is able to feed the pig with plastic balls that pull apart to reveal alphabet letters. These balls enter through the mouth, roll through the pig, and can then be retrieved from the rear storage area of the toy. This toy is also designed to be upgraded with wheels later in the child's development.
Intent of Redesign: When writing my original concept, I did not consider how the verbiage would affect AI's renderings. In my redesign, I want to test how AI would take my original concept with my sketch and rethink the toy. Additionally, I intend to play around with prompt-to-image influence.
Process: For my process, I inserted the concept above into the prompt section and the original image into the upload section of Vizcom. I played around with how much influence the image had on the AI drawing by increasing each render's influence by 10 percent until I reached 100 percent image influence. Once I picked my image, I edited the mistakes that were in the AI generation, such as a messed-up hand, a face with a weird tongue, and a joined leg in the back. To change the child's face, I used generative fill in Photoshop with the prompt: make a child's face happy and looking down at the toy. To change the hand, I used the same method with the prompt: remove the middle finger that is sticking out. Lastly, to get rid of the weird leg, I used the brush tool, color selection, and heal tool to expand the background.
Reflection: After using Vizcom to redesign my project, I found that the influence of the image largely impacted what the result would be. I found that influence set to 50 percent would resemble my form with slight changes, but when set to 40 percent or lower, it would completely diverge from the design. I also noticed that AI could not decipher what my object was. It knew that it needed a pig form (based on the concept), but could not infer how it should be used. Additionally, I found that because my prompt did not take into account key words, Vizcom would often pick and choose which words would influence the design the most. This made the designs with little image reference have a larger variance when I would regenerate them. It would often make the image look like a real pig with random fuzzy balls around it. Finally, I am surprised that my redesign looks more like how I pictured it in my head than my final product does. I do like my final product, but I feel the AI image captures the fuzzy felt look more than my final.
Designed by: Kelly Chen
Original Concept Description: The fabric choice mirrors an abstraction of elephant wrinkles, providing a tactile and visually engaging experience. The careful consideration of weight ensures a comforting feel, offering a sense of security while remaining easily portable and light. Pulling on the elephant's tail initiates the trunk to curl up. This interactive element invites children to engage their plush companions. After inducing the response of the trunk, one can calm the elephant with gentle strokes to straighten its trunk, fostering a nurturing interaction that encourages empathy and connection. Embrace Elephant goes beyond being a toy; it becomes a tool for emotional well-being. Through its interactive elements, it provides not just physical comfort but also helps children understand and respond to emotions.
Intent of Redesign: I want to explore and experiment with different materials on my soft good plush. In the initial stage of designing this elephant, I wanted to go in so many routes with different materials. I'm very curious to see what my elephant would look like if it was specifically made out of silicone, glass, or wood.
Process: I had a whole process of switching back and forth between Vizcom and Midjourney. I began the process with Midjourney by linking an image of my prototype into a mini chat to use as a thread. I would then write a prompt based on the image and use that to generate the elephant in different materials. It took a while to figure out how to include the image in Midjourney, and when I did, I already didn't like the suggestions Midjourney made. I switched to Vizcom and got better results until it started to mutate my elephant, like adding two trunks, the further down I went. I gave Midjourney a second chance, and the results were still not the ones I was looking for. I changed the seed to the sketch of my elephant because I think Vizcom had trouble generating certain areas due to the lack of contrast in the fabric. I enjoyed the look of Vizcom with my sketch and experimented a lot with the influence and prompts. After changing my prompt to finalize the redesign, I was able to take it into Photoshop and edit minor changes.
Reflection: I was amazed by Vizcom and how well it rendered my sketch. Before this assignment, I thought AI wouldn't be able to get anywhere close to what you have in your head. Vizcom was very close, other than the minor changes I needed to make in Photoshop. I still believe that AI is a good tool to use for ideation. AI can't do everything for you, because there are limitations, like not being able to make out a certain part of an image or to comprehend your prompt well. But it can still help you take on new ideas and perspectives on a design. Overall I think that Midjourney is good at giving inspiration rather than helping you better visualize your design like Vizcom does.
Designed by: Chipper Stephen Orban
Original Concept Description: Using simple shapes and a combination of hard and soft materials, "Roll Buddy" offers an exciting spin on the traditional pull-toy. This design is intended for children 5-6 years old, and encourages fine motor skills such as tugging, running, and grabbing. The form is essentially one large ball bearing that can be pulled along using the rope arms. The toy's hands can snap together to create a closed loop for more play experiences. In addition to being physically engaged, I wanted to design something a child could get emotionally attached to. There are infinite ways the toy can be customized into unique, fun characters.
Intent of Redesign: To produce a visual overhaul stylized after an owl, and introduce more tactile experiences such as the flexible plastic tubing and various textures on the toy's body. Also to explore wooden forms that are nearly impossible to produce using available shop crafting techniques.
Process: I broke the project down into the three most interesting stages of iteration, mirroring the steps I took while crafting the original toy. I began with images prompted with wood and CNC terms in order to suggest the development of a rough face. Next I prompted Midjourney with the URL of previous upscaled images but added "3D print" in the prompts. This nods to the development of a more refined prototype that moves beyond wood. Finally I used my favorite upscaled shot from the previous stage and turned it into a hero shot using a combination of Photoshop painting and generative fill techniques.
Reflection: My original toy project was bare and unfinished. Using Midjourney on this redesign project - in a strange way - made me feel like I was patching my ego. Seeing those first forms out of wood was exciting, and I imagined how differently my project would've ended up had I been able to craft with the precision Midjourney was suggesting. However, the real rush (and to my surprise) came when I was editing the hero shot using Photoshop's generative fill and expansion tools. This was the first time I had used these tools, and I was shocked at how accurate the generations were. I was quickly able to remove unwanted artifacts and replace them seamlessly, although I did experience some trouble getting the generated items to be oriented in proper ways. I can definitely see how using AI as a supplemental tool can allow designers to move through the ideating stage much faster. However, it is critical to know that these generations do not always represent practical ways to problem-solve (such as the wooden faces in my first stage of iteration).
Designed by: Emily Tanchevski
Original Concept Description: This wooden noisemaker teaches cause-and-effect principles and aims to support the development of gross motor skills in infants and toddlers. Activation of the handle and attached axle rotates the gears, producing a wooden clattering sound as each tooth strikes the percussive structural body.
Intent of Redesign: I wanted to use this opportunity to spice up my original design--add more visual stimulus to the auditory-focused original concept, along with my branding, some color, and more complexity in the form. I was intent on only using bare, natural wood for the entire structure since a large part of my original concept was about the subtle differences in tone and pitch produced by different wood species. I appreciated the sophistication of the original toy design due to its materiality and wanted to carry that on while also tying in the aforementioned points of interest. The subtle use of primary colors and tiered quality of the gears and structural body alongside the toy's branding give the original design that extra bit of pizazz I was looking for.
Process: I began by giving the Midjourney bot key-word descriptions of the toy's original concept to see what that would produce. The images were fairly close to the mark, but still very vague. I then chose three process photos of the original toy in different stages (early ideation, refined sketching, and final model) to incorporate their visuals into the written prompts. I experimented with each and tried several combinations, but I found that using only the sample photo of the final prototype created images that most resembled what I had in mind. From there, I played around with different variations and brought my final image into Procreate where I added the wooden handle, some linework, shading, and branding.
Reflection: I was pleasantly surprised with how well the Midjourney chatbot handled my image prompts and was able to translate them into some pretty coherent images. I suspect that the fairly simple form of my original design made it easier for the A.I. to interpret the image prompts. I was also impressed with the level of complexity some images had which the A.I. added entirely on its own. I could definitely see myself (responsibly) using A.I. in this scenario to help me generate variations on the same prompt if I ever get stuck or need some inspiration. This time around using Midjourney, it felt like less of a wrestling match and more like I was describing the ideas bouncing around in my brain to someone who inherently understands what I'm trying to say. Of course, the modification of my chosen generated image through Procreate is what tied all of this together, further supporting the notion that the human intent behind all generated material is something that A.I. cannot replicate.
Designed by: Héloïse Richer
Original Concept Description: 3D model, Solidworks, theme Dune-Buggy Meccano. Old school military jeep x dune buggy.
Intent of Redesign: For my redesign, I want to make the car look more like a vintage Meccano toy. To show how big it is, I will add a kid into the scene for scale.
Process: I started my project by uploading my work to Midjourney, where I asked the AI to describe the picture. Then, I used this description to make a new image. At first, the image looked too much like a real jeep and not like a toy. So, I changed my instructions to make it clear I wanted a toy car. The AI didn't quite get the idea of a toy made from Meccano pieces at first, but after a few tries, I got a good result.
Next, I wanted to make the toy look more real by adding a picture of a child putting it together. It took a few attempts, but I finally got a very lifelike picture of a kid building the Meccano jeep/dune buggy. Then, I combined this picture with one from a toy catalog to make it even better.
After that, I used Vizcom to change the picture to fit my idea better. Finally, I put the picture into Photoshop, made some more changes with the generative AI, and added the Meccano logo.
Reflection: Looking back on this project that I've done in my first year has been really interesting. It was my first time doing 3D modeling with Solidworks. Using Midjourney to revisit this project again helped me delve further into the concept and determine the proportions.
If I did this project again, it would be interesting to add an instruction manual and show the parts laid out flat.
I believe AI could have been a useful tool in shaping my ideas and guiding my creative approach towards a specific style.
Designed by: Sydney Greenwell
Intent of Redesign: I wanted to see how AI would alter my original design in general. I wanted to see the different ways this could be marketed, using color schemes, and enhancing the overall structure.
Original Concept Description: A colorful pull toy with a band of abstract instrument rattles. The rattles can be played separately, or on the platform to act as a journey through sound, music, shapes, and colors, evolving with each stage of a child's development.
Process: I decided to purposely choose loose parameters for this assignment. I was interested to see how AI would alter my original design. At first, I had a difficult time learning how Vizcom worked. It would use my original design more than Midjourney would, but the materiality and shapes wouldn't make much sense. A lot of the designs looked sharp and lacked color, not suitable for children. I found it worked better when adjusting the level of the original drawing's influence. I found the most success in Midjourney. I inserted my drawings, sketches, and final composition to get an idea of what I was looking for. I then used the reference image along with a prompt to get the images closer to what I had in mind. I produced many different variant options for the images I liked to get them closer to what I wanted.
Reflection: Once I got the hang of it, I found it interesting to create my toy in a newly imagined way. The designs I got were more whimsical and had a different hierarchy of scale, which I thought was really interesting. I had trouble with Vizcom and focused on using Midjourney to create my final image. I decided to alter my final image by incorporating the rattles and adding wheels to the platform. AI allowed me to transform my design and image with a new scale, hierarchy, and form. I think this can be a really valuable tool when ideating or refining work to see more possibilities than what I picture in my head.
Designed by: Francesca Knoetgen
Intent of Redesign: I was curious to see how AI could alter my toy into a larger scaled interactive piece, and wanted to play around with different materials.
Original Concept Description: With Silly Beans, children experience a playful education as they learn coordination, balance, and dexterity. Each bean is made out of a soft silicone material, providing a tactile and safe engagement for young children. The vibrant array of colors is both visually appealing and contributes to their early color development. To begin play, simply snap the beans together using a ball and joint mechanism, and grab hold of the connected bean can, linked by a stretchy string. The materials Silly Beans are created out of make for an easy-to-clean design that allows it to double as a bath or pool toy for more versatility.
Process: For this project, I started by uploading the drawing of my toy prototype, Silly Beans, and playing around with the basic commands of Vizcom. I wanted to gain a general understanding of what it was capable of, including the difference in results between "render" and "refine". Once I got the hang of it, I quickly found that instead of using the 2D drawing of my toy, it would be more beneficial to use an image of the physical prototype I made. From here I really leaned into the effect that the "drawing influence" had on the results of my image and prompt. I did a lot of experimentation using the same (or similar) prompts while changing the percentage of drawing influence, and went back and forth between render and refine. Once I had established that I wanted to use the new concept of a scaled-up interactive furniture piece for children, I was able to tweak the commands of the prompt to give me a result I liked. After landing on a new toy design through Vizcom, I switched over to Midjourney to see what it could do with it. I was curious to see how it would take the simple furniture design and make it more playful and child-focused. While it did create very fascinating designs, I found they diverged a little too drastically from my original concept to use.
Reflection: At first I found Vizcom quite frustrating, as I could not seem to write a prompt that gave me the results I was looking for. No matter how specific I got about material, color, or shape, it still was not providing outcomes that reflected the prompt. However, once I switched the image I was using and played around with the drawing influence percentage, I began to notice a prominent change in my outcomes. I really enjoyed how Vizcom lets you maintain as much of the integrity of the design as you want, because this gives you more influence over what the result turns out to be. I think AI is a great tool to utilize when you need a creative outlet to take your personal designs and ideas to the next level.
Designed by: Hannah King
Original Concept Description: The original toy I designed was a sensory-based toy and vessel inspired by the needs and behaviors of neurodivergent children. The main toy base was a large wooden UFO in which the child could keep their sensory friends safe for whatever journey awaited them. The sensory toys were three separate soft-goods "alien" creatures of different textures and sounds. A child could use them like Barbies or other figure-style toys, or as comforting companions that could help them stim in times of stress. The UFO was designed as a whimsical, bulbous form on silly little wheels housed within the bottom of the UFO body.
Intent of Redesign: My intent during this redesign was to explore other ways the UFO body could have turned out. I wanted to explore similar and different visuals, materials, and colors while learning more about prompting AI.
Process: I began this process with sketches, a 3D model, and images of my UFO, as well as verbal prompts, to see what I could get AI to elaborate on. The AI didn't seem to understand my sketches or 3D models very easily, so I stuck primarily to my images and written prompts. I had more success receiving variations from Midjourney than from Vizcom, so I waited to use Vizcom until I had a rendering I favored. From there, I explored a few more slight visual paths before deciding it would be easier to draw in the lines I wanted myself than to get AI to understand the "sketch" idea I was going for.
Reflection: I like aspects of many of the revisited forms I got to explore, but at the end of the day, I still prefer the one I produced without AI. I may go back and make a variation of it if I get the chance, but that would be to push the fabrication side of the toy. It was funny seeing what AI put out from the original and a few guiding words. Regardless, many of my outcomes were off base and not what I had in mind at all. I can always keep working on writing prompts, but I don't think this is a way I would enjoy using AI within my designs.
Designed by: Lincoln J. Ahn
Original Concept Description: This toy is designed to encourage skills in color recognition and construction. The form of the snake adds characterization and a large face the child can find pleasant, while supporting the long, reconstructable form. The googly eyes are attractive to younger children. The main segments are airbrushed in the primary colors to teach the concept of color and plant the seeds for combining colors in the future.
Intent of Redesign: A full resin version of my segment snake toy reimagined for a future with only AIs and androids. The color aspect has been dropped since recognition of color is easily downloadable. The polygonal face and plastic support eyes are meant to appeal to the new android audience. The overall design is adapted using AI from a famous 21st century designer.
Process: I used the final image of my toy project as the base for my AI augmentations. I started out in Midjourney but stopped because it kept giving me images that were barely recognizable. For this reason, I switched over to Vizcom, since it has a toggleable bar for image influence. Using this, I experimented with different material and style descriptions until I got something I was happy with. However, a consistent problem I encountered was the generation of an arm sprouting from the head of the snake. I used Photoshop to edit this out and add simple text.
Reflection: This process was a lot more frustrating mechanically compared to the last one. The quirks of AI were out in full force, and without Photoshop there would be an arm in my final version. I think the segmented and novel appearance of the toy made it harder for the AI to understand, and the characterization of "snake" was completely lost in Midjourney. I would definitely choose a different project to put into these AI engines, perhaps one with a more recognizable function.
Designed by: Jessica Angst
Original Concept Description: Leaning into the sound aspect of development, I started researching wooden slit/key drums as well as different wheel options since I had been interested in the movements that could be created. After sketching, I worked off of an idea of a frog and leapt with it.
Intent of Redesign: I thought that making my previous toy larger in scale, like a car, could be interesting, and I wanted to explore further what AI could come up with.
Process: On Midjourney, I uploaded my image and started to play with some wording and phrases. I tried using lists separated by commas for this assignment, versus last time when I wrote sentences, and it took a few tries to get closer to the direction I was imagining. After I got an image I liked, I took it into Fresco, removed pieces that seemed too random, and added some strokes, blending, and a steering wheel.
Reflection: Surprisingly, the tries that had fewer descriptors alongside the image link produced images that I thought were more interesting and better aligned with what I had in mind.
Designed by: Isaac Moyer
Original Concept Description: One of my biggest goals with this toy was allowing the child to be creative with it, so they wouldn't get tired of it and it would keep their interest for long periods of time. To do this, I wanted to activate that pulling feature by having the kids pull on a platform to launch a ball into a customizable track that can be attached and detached easily. The ability to change where the track pieces go, and which pieces are used, gives the kid the freedom to explore what works and what does not, as well as room to be creative.
Intent of Redesign: For my redesign I wanted to see how AI would handle creating various different forms of wall mounted tracks for the toy, and see how recognizable the generated image would be to my original concept.
Process: I first began with Vizcom to see how AI would imagine my concept and how it could use my poster to reimagine it. I played around with letting it reimagine the poster from the image alone, controlling how much influence the image had over the AI-generated one. I was not a massive fan of the outcomes, as many of them looked very strange and strayed too far from the concept I was going for. I decided to try Midjourney next, and this is where I found the most success. By combining my image with the right wording, I was able to get to a spot that I thought was close to my goal for the redesign. I then took that image into Photoshop and used the brush tool to add balls going through the track, and the generative fill tool to make the scene feel like a more finished product.
Reflection: Overall, I think this was a good exercise in exploring AI and its powers and limitations. This was definitely harder than the first project, and there were a lot more obstacles to work through to get the intended results. I realized how precise I had to be in order to get what I intended. There are a lot of limitations to what AI can do, but I am happy with the end product.
Designed by: Joe Christiano
Original Concept Description: The toy I originally designed was a kind of grow-as-you-play toy. It was a modular toy shaped as a hexagon with different sensory toys on each panel. These panels could be stuck up with Command Strips or nailed to the wall. The whole idea was that wall-mounted sensory toys are great tools for kids to learn and have fun with, but I felt they were too bulky and too permanent. This design allowed parents to have an aesthetically pleasing toy on their wall while also being able to take it off, reorganize the panels, or take one along for a car ride, the dinner table, or belly time.
Intent of Redesign: I really liked my toy concept, but I wanted there to be more variety in the shapes that could go on the wall instead of just hexagons. Since I liked my original idea, I kept the same concept and look of the toy while trying to find a version with different-shaped panels.
Process: I used Midjourney for my process. I started with a picture of my final toy, then gave it a prompt and regenerated a few times until I got something I liked. From there, I gave the newly generated image a prompt and repeated the same process over and over until I got my final picture.
Reflection: I found that it was really hard to control what the AI was going to give you. Most times it didn't even do what I was asking and generated something random. A few times, it didn't generate anything to do with the image I imported into the prompt. Other than that, I think this could be a really good tool for brainstorming toys. When I tried using a drawing, I felt the AI was better at interpreting drawings than real 3D models or the actual physical thing. Overall, I had a fun experience; I would have liked to get better outcomes, but I think this is a solid tool. I ended up being able to add my sensory toys, rendered from my Fusion 3D file, into the final image, which made it a lot more of what I wanted.