Introduction: Memoriae

For my final project, I wanted to use the idea of flocking agents to "carve out" geometry or create topology textures.


Supplies

    • Transparent PLA
    • Rhinoceros + Grasshopper
    • MeshLab, MeshMixer
    • Adobe Photoshop

Step 1: Points As Agents, Part 1

My initial plan was to set up a simple particle system in the Rhino/Grasshopper environment. This was fairly simple to do: each agent is a point starting at a random x and y position, with a slider controlling the number of agents. I also knew that at some point I would have to expose parameters for the three rules of the boids algorithm (separation, alignment, and cohesion) that my flocking is based on, so I set them up in this initial phase, though they were non-functional at first.

I then needed to work on moving the points. How I usually approach this (in pseudocode) is:

for agent in agents:
	# update the agent's position by its velocity components
	agent.x += xComponent
	agent.y += yComponent

However, what I failed to realize until this moment was that in the environments I'm used to, the program's draw loop runs 60 frames per second. My Python script in Grasshopper does not loop at all: it is a static script that executes once (and gets recompiled every time a slider is changed).

Therefore, animation is a huge challenge for me on this new platform. Mert suggested a website about animating with sliders, so I dove into that source.

My first attempt was to have a slider called "time" that would specify how long the code would be looping for. My approach in the script looked like this:

for i in range(time):
	move()  # update every agent's position as in the pseudocode above

However, the outcome was not what I hoped: it essentially did nothing. I therefore tried to use a boolean "switch" to trigger animation with a while loop (in pseudocode):

if booleanSwitch:
	while counter < time:
		move()
		counter += 1  # without this, the loop never ends

I crashed my program a couple of times while messing with the while loop (by accidentally programming an infinite loop), but even when the code ran fine, I still did not see what I expected to see.

I then realized that the Python script recompiles every time a slider is changed. Because I was initializing my points at random positions between point1 and point2 (specified in Rhino), every time I changed a slider the points would jump to new random positions, instead of their original positions being updated by my move function.

I arranged the points in a grid and then tried breaking down the x and y components by using "increment" sliders, which I could animate to change the points. Now, it started to look better. By adjusting the xIncrement and the yIncrement sliders, I could move the points to the left and right. I added an "agentWeight" variable that was randomly picked when the agents were created, allowing a bit of movement away from the specified grid pattern. However, this was still not the animation that I wanted. I was visualizing the agents moving around their grid "bounding box" (specified by points 1 and 2 in Rhino), where the user could clearly see the agents traveling in their own unique patterns as a slider was incrementing.
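The increment-slider approach above can be sketched in plain Python. This is a minimal stand-in, not the actual GhPython script; the `Agent` class, weight range, and slider values are all hypothetical:

```python
import random

random.seed(7)  # reproducible weights for this sketch

class Agent:
    """A point agent with a randomly picked per-agent weight."""
    def __init__(self, x, y):
        self.x = x
        self.y = y
        # random weight assigned at creation, so each agent drifts
        # slightly differently from the pure grid translation
        self.weight = random.uniform(0.5, 1.5)

    def move(self, x_increment, y_increment):
        self.x += x_increment * self.weight
        self.y += y_increment * self.weight

# 3x3 grid of agents
agents = [Agent(float(i), float(j)) for i in range(3) for j in range(3)]

# animating the xIncrement slider nudges every agent to the right,
# each scaled by its own weight
for agent in agents:
    agent.move(0.1, 0.0)
```

Because every agent shares the same increment but carries its own weight, the motion reads as a grid that loosens slightly, which matches what I saw: better than a rigid translation, but still not independent travel per agent.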

Step 2: Sidetracked Playing

While I was trying to get my animation working, I was also trying to visualize what a geometry might look like. I played around with some Grasshopper components, browsing and picking what seemed interesting. I was able to use the Interpolate Curve component to get a curve from a set of points, which I could then extrude into a geometry.

Okay, now back to my goal.

Step 3: Points As Agents, Better and in 3D Space

I updated my script to reflect a 3D space, arranging the agents in a grid at first and using a speed slider to specify the increment amount in the move function (instead of x and y sliders, which made the whole grid translate by the x or y value). This yielded a better result than before; however, the points were "jittering" around their original locations. I also realized that this grid system was not what I was going for: I truly wanted the agents to feel more organic, so I needed to find a way to convey this clearly using random positioning.

I then played around with some mesh triangulation components in Grasshopper as a bit of a mental break. The Voronoi was cool because you can see the diagram change as the points move. Now, back to the goal.

I didn't want to get rid of the grid arrangement completely, so I added a toggle to set up the agents either randomly or in a grid pattern. I then tried keeping track of the "history" of each agent's travel in a 2D array, which can then be animated with a slider that steps through the time points (essentially like a flip book). For the grid pattern, this works perfectly: when speed = 0, the points don't move as the timePoint slider animates, and when the speed value increases, the points jitter in proportion to it.
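The history idea can be sketched as follows. This is a simplified stand-alone version (random jitter stands in for the real flocking motion, and all names are hypothetical): positions are precomputed into a 2D array indexed by time step, and the slider only reads from it.

```python
import random

random.seed(1)  # deterministic for this sketch

num_agents = 4
num_frames = 10
speed = 0.5   # stand-in for the speed slider; 0 means no motion

# agents start on a simple grid along the x axis
grid = [(float(i), 0.0) for i in range(num_agents)]

# history[t][i] is agent i's (x, y) position at time step t
history = [grid]
for t in range(1, num_frames):
    frame = []
    for (x, y) in history[-1]:
        # jitter each agent in proportion to speed
        frame.append((x + random.uniform(-speed, speed),
                      y + random.uniform(-speed, speed)))
    history.append(frame)

# the timePoint slider is just an index into the history ("flip book")
time_point = 7
current_frame = history[time_point]
```

Because the whole history is built once, stepping the timePoint slider never re-randomizes the starting positions; that is exactly why this worked for the grid setup but broke for the random setup, where initialization re-ran on every slider change.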

However, in the random view, this is not the case (because of the problem with the agents being randomly positioned every time the slider value changes on animation). I therefore had the idea of trying to separate the script into two processes: one to place the agents initially and another to move them. However, I decided to wait until the class check-in to see if there were better suggestions on how to proceed.

Step 4: Read File in Grasshopper to Place Agents As Points in 3D Space

After the class check-in on Thursday, I received some great suggestions on how to move forward with my project, including a Grasshopper plugin for Windows that allows live message sending over OSC. I was really interested in this option, but I wanted to start a bit more basic given the time crunch.

I therefore went with the suggestion to read in a file containing the agent positions across the duration of the simulation. I added some code to my previous algorithm to output all agent positions into three files (x, y, and z positions) for a one-minute simulation run (a time period I chose based on the simulation). My simulation parameters were tuned for 500 agents; any fewer and the system could not reach an equilibrium state, with all agents dying off within ~20 seconds of runtime. I decided that if I wanted to visualize fewer agents or restrict the time period, I could do so in Grasshopper after the fact.
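The export step might look something like this. The file layout (one line per frame, one comma-separated value per agent, one file per axis) is my assumption about a reasonable format, not the author's exact code:

```python
# hypothetical simulation output: positions[frame][agent] = (x, y, z)
positions = [
    [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)],
    [(0.1, 1.1, 2.1), (3.1, 4.1, 5.1)],
]

def dump_axis(frames, axis):
    """One line per frame, one comma-separated value per agent."""
    return "\n".join(
        ",".join(str(p[axis]) for p in frame) for frame in frames
    )

# one file's worth of data per axis
x_data = dump_axis(positions, 0)
y_data = dump_axis(positions, 1)
z_data = dump_axis(positions, 2)

with open("x_positions.txt", "w") as f:
    f.write(x_data)
```

Splitting the axes into separate files keeps each one a simple 2D grid of numbers, which is easy to feed through Grasshopper's Read File component.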

I spent half a day wrangling with this in Grasshopper before I realized that the fix was to change the type access on my Python script: the Read File component outputs a list, so I needed to change the input parameters of the Python script component to "List Access" instead of the default "Item Access". After I made this change, I was able to quickly visualize my agents' starting positions.
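With "List Access", each input arrives inside the script as a full Python list rather than one item at a time. A sketch of recombining the three axis files into points (plain tuples stand in for Rhino points here, and the data values are made up):

```python
# with "List Access", each Read File output arrives as a full
# Python list of lines rather than one item at a time
x_lines = ["0.0,3.0", "0.1,3.1"]   # one line per frame
y_lines = ["1.0,4.0", "1.1,4.1"]
z_lines = ["2.0,5.0", "2.1,5.1"]

def parse(lines):
    """Turn lines of comma-separated values into rows of floats."""
    return [[float(v) for v in line.split(",")] for line in lines]

xs, ys, zs = parse(x_lines), parse(y_lines), parse(z_lines)

# the starting positions come from frame 0 of each axis
start_points = list(zip(xs[0], ys[0], zs[0]))
```

In the real script, each tuple would become a Rhino point (e.g. via `rhinoscriptsyntax`) before being output to the canvas.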

Step 5: Adding a Time Slider for Animation

I added a time slider which, upon animation, increments the "history" index to move through the agent positions across time. However, because the script is creating points every time the slider is updated, it is very slow. I therefore placed the animation portion in a separate grasshopper script, which helped with timing substantially but still doesn't create a visually cohesive effect.

I will proceed with building the geometry and then visualizing the final product, rather than trying to visualize the entire simulation.

Step 6: Visualize an Agent's Path With a Curve

Now for the fun part! I decided to take each agent one by one and try to visualize its full path. Instead of using time as a slider to animate through the agent's motion history, I used the agent number as a slider and output the agent's entire history as a list of points (60 frames per second * 60 seconds = 3600 points, to be exact!). The result, although simple, is extremely gestural and exciting. I actually prefer this new way of visualizing my previous algorithm, as it is now clear what path the agent was taking (whereas in the video of my algorithm from last quarter, everything was flying around on screen and you couldn't really "track" an agent). Furthermore, even though you don't see the other 499 agents in this space, this one agent's motion still represents a memory or "history" of its interactions with every other agent during the simulation runtime. This is one step closer to my aesthetic goal, where a single sculpture is a memory (which inspired the title of the piece, Memoriae).
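Extracting one agent's trail is just a column slice through the per-frame data. A minimal sketch with tiny stand-in data (the real run has 3600 frames and 500 agents; names are hypothetical):

```python
# tiny stand-in for the parsed axis data: xs[frame][agent], etc.
num_frames = 5  # the real simulation had 60 fps * 60 s = 3600
xs = [[float(f + a) for a in range(2)] for f in range(num_frames)]
ys = [[0.0, 1.0] for _ in range(num_frames)]
zs = [[0.0, 0.0] for _ in range(num_frames)]

def agent_path(agent_index, xs, ys, zs):
    """Collect one agent's position at every frame of the run."""
    return [(xr[agent_index], yr[agent_index], zr[agent_index])
            for xr, yr, zr in zip(xs, ys, zs)]

# an agentNumber slider would select which path to visualize
path = agent_path(0, xs, ys, zs)
```

The resulting list of points is what gets handed to Interpolate Curve in the next step.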

Step 7: Create a Geometry, Import Into Cura, Rinse and Repeat

I then used the Interpolate Curve object in Grasshopper to connect all 3600 points of an agent's trail and used the Extrude to a Point object, which I specified as the origin to get an interesting result. I then used the Offset Surface component and baked the geometry to yield the rendered result.

However, when I brought it into Cura, the print wasn't viable because the walls were too thin. I therefore had to go back to the drawing board to figure out how to make the interpolated curve thicker so it would extrude properly to the origin. I also realized (upon exporting) that the scale was really small, so I multiplied each point by a factor of 50 in the second Python script.
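The rescaling fix is a one-liner over the point list; a sketch with made-up coordinates:

```python
scale_factor = 50  # compensates for the very small export scale

points = [(1.0, 2.0, 3.0), (0.5, 0.0, 2.0)]
scaled = [(x * scale_factor, y * scale_factor, z * scale_factor)
          for (x, y, z) in points]
```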

Step 8: Final Geometry, Imported Into Cura!

I was able to use the Extrude Curve component in Grasshopper to extrude the interpolated curve with a line that I drew in Rhino, which specified the extrusion thickness.

This worked when brought into Cura, yielding a print time of approximately 15 hours with the settings specified in the image. I decided to print this first test print fairly small scale in order to have a day of printing time in case I needed to make adjustments. If all goes well, I would like to print this object in a much bigger scale, as I believe the rendered visual is fairly compelling, yet I have some qualms about printability (if it is viable as a print, I am anticipating some serious issues with removing support structure).

Step 9: Failed Print, Next Iteration...

As expected, the print was not viable. I woke up in the morning to find the nozzle moving in midair and nothing being extruded, so I will make some adjustments to the geometry (by playing around with boolean unions/subtractions or by trying other Grasshopper components).

However, if you squint a little bit around the support structures, the design looks kind of cool.

Step 10: Geometry Using Surface Grid Component

After doing some googling, I was interested in trying out the Surface Grid component, which creates a surface out of curves. After playing around with the U Factor parameter slider, I found an interesting shape that I was thinking could be healed in MeshLab and then printed successfully.

Step 11: Healing in MeshLab

I tried a lot of filters to heal the mesh, including Close Holes (after fixing non-manifold vertices by removing faces), Laplacian smoothing, merging close vertices, VCG surface reconstruction (the only surface reconstruction operation that didn't crash MeshLab), and snapping mismatched borders. However, I was unable to properly heal the holes in the mesh. So back to the drawing board.

Step 12: Merge Faces in Rhino, Remove Offset

After more googling, I found out that Rhino has a Merge Faces feature that operates on a poly surface. I therefore used a Mesh Smoothing component and then a Mesh to Poly Surface component in Grasshopper, baked the result, and used the Merge Faces operation in Rhino. However, while I thought this might do the trick, the operation crashed Rhino.

My next thought was that the Offset component was causing my issues (see Step 10, differences between the photo with the form in green and the form in red, which was after the offset). When I removed the Offset component and baked the mesh geometry, the surface of the shape was intact but there were lots of holes, which would obviously not be viable for a print.

Step 13: Boolean Difference With a Circular Geometry

I decided to try a boolean difference (subtracting my geometry from a sphere) to get a hollowed-out form for better printing. However, when I tried to perform boolean unions and differences with a sphere, the operation failed in both Rhino and Grasshopper. This is because I have a bunch of self-intersecting curves due to the complex motion of the agent.

At this point, I feel like I have exhausted my knowledge regarding the approach that I want to take, particularly in the time frame that we had. I was really excited about the first flower-like geometry that I had created, and I am almost envisioning a paper sculpture due to its delicate form. However, I would have loved to see a finished printed output in the translucent filament that I have. I would love any feedback or ideas on how to proceed!

Step 14: Playing With MeshMixer

After Mengjia's feedback in the final project presentations, I wanted to try using MeshMixer to fix my model in order to make it printable.

I used the Close Cracks function first, which didn't do much visually. I then played around with the settings on the Make Solid feature, but unfortunately it gave me a worse mesh than I had at the start, which wouldn't be feasible as a print. I checked the mesh by importing it back into Rhino and looking at the rendered result, which turned out to be less exciting than my original form.

I will try again to print my original model at a bigger size and observe when and where it fails. It may have been a problem with my filament breaking off rather than with print feasibility (the same way Sonia's second print failed). If this doesn't work, I will try a new approach: limiting the number of input points and developing a system to make the print feasible.

Step 15: Re-Printing

I decided to revisit my print in its original form (just bigger), as upon further investigation, my printer nozzle was clogged (which is probably why it stopped extruding mid-print).

However, because of the larger size, the print will take at least two days to complete. I will update this documentation with the final result!

Update: I am a day into the print and it is looking really great at the larger scale! I think the failed print earlier was due to a filament extrusion issue, not because the geometry wasn't a "viable" print. However, the support structure is pretty integral to the print, so it may be difficult to remove at the end. I have included some print-in-progress pictures.

Step 16: Images of the Final Print!

The final print turned out much better than expected, although there are some quality issues. I also will probably not be able to remove the support structure from the geometry (when I tried removing the raft from my failed print, the filament broke off improperly, causing damage to the print). However, I would love to potentially use a tool later on to file away the supports. If anyone has any thoughts, please let me know!
