One of the nice things about Strange Loop (in addition to the gorgeous venues and of course the amazing people, from the organizers to the speakers and attendees) is that they prioritize posting high-quality videos of all of the talks.
So here’s my talk! It’s forty minutes long and covers knitting, machines, and a handful of languages for talking about both of them. I’ve posted a full transcript below the cut.
Some time ago, Chris asked me to make some sort of game or party activity for her wedding to Rob.
I knew I wanted to continue to explore the domain of low-key/opt-in ongoing party games that help guests strike up conversations, as I did with Secret Agent Party, but the specific theme and mechanic eluded me for some time. A wedding is, of course, not a neutral venue; the project needed to be relevant to both the overall meaning of a wedding and to Chris and Rob’s relationship in particular.
I found myself lingering on the importance of the future, and more specifically the idea of choosing your own future by being deliberate about the values that would guide it, which is central to wedding vows and also what I see as one of Rob and Chris’s great strengths as a team. Two other influences were Chris’s interest in Tarot as a means of generating stories, and the wedding’s visual theme of stars and constellations.
I half-formed many ideas, squandering quite a bit of time, before hitting upon one that I was enthusiastic about: a deck of cards and a fortune-telling device that would take readings from hands of cards that were dealt to it. Attendees would be given two cards and a few coins; they would be required to show three cards to the device and spend a coin to receive a fortune.
Photo by Vincent Zeng
Most of the rest of this post talks about implementation details. Skip to the bottom if you just want to see a couple of pictures of the project in action.
!!Con is a tiny two-day conference about “the joy, excitement, and surprise of programming.” I presented at this year’s !!Con on the topic of knitting machines, and the video of my talk is now available on YouTube! I’ve posted a transcript and some images from the slides below the cut.
If you’re interested in finding out more about the various projects I mention, I strongly encourage you to download the PDF version of my slides — almost all of the images are links to the creators’ sites and other information.
I’m in Saint Louis for Strange Loop, a code-centric conference “that aims to bring together the developers and thinkers building tomorrow’s technology in fields such as emerging languages, alternative databases, concurrency, distributed systems, mobile development, and the web.” Although it didn’t really fit in any of those impressive categories, I spent some time on Wednesday presenting a 2.5-hour workshop on my favorite Weird Art Language, Inform 7. As with any new language, it’s hard to learn more than a tiny fraction of Inform in just a few hours, but we managed to cover kinds/properties, basic adaptive text, action processing rules, and new actions. I love seeing what scenarios participants decide to implement, and this group didn’t let me down: we had a slayable Smaug, some cats, a baby in a tuxedo, and a classic “where I am right now” game set at Strange Loop itself.
The workshop was structured around sample code that was introduced chunk by chunk alongside relevant IDE features, with livecoding to work through participant questions when they came up. I also had a brief slide deck to introduce the topic and situate our fairly narrow focus (parser-based interactive fiction) in the wide world of text-based interactive works. I’m not sure how useful these are outside the context of the workshop, but my sample code, slides, and a simple chart of the I7 action processing rules are all available on GitHub if you want to take a look.
There were approximately 10-15 players at a casual cocktail-oriented birthday party.
When a new player arrived, they received an agent number to keep secret.
At irregular intervals—basically, whenever I felt like it—all active agents received a text message with a new code word. They were encouraged to surreptitiously insert the word into conversation. Code words were not re-used.
If an agent thought another agent was using a code word, they could report the word by texting it in. The accused agent would then receive a text telling them that either “enemy agent [number]” or “friendly agent [number]” had intercepted the message; ideally, agents could use a process of elimination to discover other agents’ numbers and attempt to only use their secret words around established friendly agents.
At some point, I also opened up the ability for agents to text each other based on agent number, so they could message an agent without necessarily knowing their identity.
The endgame was supposed to be that agents would have to reveal an enemy’s identity to qualify for a slice of birthday cake, but my understanding was that the cake was so delicious-looking that they ate it early.
The rules were communicated solely through the text messages the players received.
More recently, I was talking to my friend Rebecca about Hacker School, and I mentioned this game. Rebecca manages the MAKESHOP at Children’s Museum of Pittsburgh and she also loves spies: she was excited to let me know that the next MAKEnight was going to be spy-themed! (MAKEnight is an after-hours 21+ event at the MAKESHOP; it generally features good food, fancy booze, and a variety of fun maker-y tasks.) She thought my game would be a fun addition to the night, and I was delighted to have another test audience.
But the rules needed a rewrite: the original rules weren’t as easy to follow as I’d hoped, and they were written for a small group of close friends, not a larger group of probable strangers. Trying to remember a list of agent identities is much easier with a small group size, and the direct texting mechanic in particular would work best for people who could make guesses about which friend they were talking to. (Also, direct texting has potential for creepiness with strangers.) And I wanted rules that would work better with people dropping in and out.
A friend of mine approached me about helping him with a costume for a wizard character in a live-action roleplaying game. We decided that a cloak was a good place to start, and I immediately wanted to incorporate some technomancer aspects—for example, a 3D-printed cloak clasp. I’ve been getting back into the swing of working with Rhino after upgrading my OS to be able to use the free beta version of Rhino for Mac (which has stabilized a lot since the last time I looked into it), so that seemed like a good place to start. Rhino is very popular among architectural designers and Grasshopper, the visual UI for scripting Rhino processes, has grown an extensive community since last I checked, but what really appealed to me was using a plugin to do some scripting in Python. I’d have access to all of the Rhino commands in a language I already knew how to work with. Using it is relatively straightforward if you’re already familiar with Rhino’s commands+arguments workflow, which is translated very literally into the Python module.
I love the work that Nervous System does with subdivided surfaces, and I’d just overheard half of a lecture on Voronoi diagrams, so I was thinking about Voronoi filigrees. I pictured something kind of spiky and airy, with a bit of blobbiness around the intersections of lines, appropriately organic yet shiny for a fantasy-medieval technomancer.
I didn’t have enough time allotted to make it totally generative, but I wanted two scripts to make the task easier: one for generating Voronoi diagrams out of points, and one to turn lines into 3D shapes. The former turned out to be a matter of grabbing an existing library and interpreting its data. The latter uses Rhino’s “pipe” command, which takes a curve (in the 3D modeling sense: any series of connected points, including line segments) and a series of radii at points along the curve and generates a pinched cylinder. I also merged spheres at the ends of the pipes so that each strut is a self-contained blobby unit.
# In honey.py, the underlying shape-making code:

import rhinoscriptsyntax as rs

def makeCurvesBlobby(curves, vertex_radius):
    for curve in curves:
        # AddPipe args: curve_id, parameters, radii, blend_type=0, cap=0, fit=False
        # parameters are the locations along the curve for the radii to apply,
        # where 0 is the start and 1 is the end of the curve
        pipe = rs.AddPipe(curve, (0, 0.5, 1), (vertex_radius, vertex_radius / 2, vertex_radius))
        startsphere = rs.AddSphere(rs.CurveStartPoint(curve), vertex_radius)
        endsphere = rs.AddSphere(rs.CurveEndPoint(curve), vertex_radius)
        if pipe and startsphere and endsphere:
            # BooleanUnion args: list of objects to union, whether or not to delete the input
            pipe = rs.BooleanUnion([startsphere, pipe, endsphere], True)

# And in blobbify.py, the user interface:

import rhinoscriptsyntax as rs
import honey

# "filter" means the object selection only allows for a certain kind of object; in this case, curves
curves = rs.GetObjects(message="Select curves to be blobbified...", filter=4)
radius = rs.GetReal(message="Enter endpoint radius:", number=1.0)
honey.makeCurvesBlobby(curves, radius)
which turns this:
The full code (including things like interpreting between the rhinoscriptsyntax and voronoi modules’ similar but non-identical ideas about how points should be represented) is on GitHub.
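That interpretation step is mostly small glue functions. Here’s a minimal sketch of the idea, assuming the 2D Voronoi library hands back (x, y) tuples while rhinoscriptsyntax wants [x, y, z] lists — the function names here are my own illustrations, not from the actual repo:

```python
# Hypothetical glue between a 2D Voronoi library's (x, y) tuples and
# the [x, y, z] point lists that rhinoscriptsyntax expects.
def to_rhino_point(pt2d, z=0.0):
    """Lift a 2D (x, y) tuple into a Rhino-style [x, y, z] list."""
    x, y = pt2d
    return [float(x), float(y), float(z)]

def segment_to_rhino(segment, z=0.0):
    """Convert a ((x1, y1), (x2, y2)) edge into a pair of Rhino points."""
    start, end = segment
    return (to_rhino_point(start, z), to_rhino_point(end, z))
```

Each converted pair can then be handed to something like rs.AddLine to get curves the pipe script can consume.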
Unfortunately, the timeline for this project was somewhat compressed and the deadline to get the print done in time for the LARP snuck by us, so I’m putting the design of the clasp on hold and getting the rest of the cloak out the door in time with a placeholder clasp.
Python/Rhino things to watch out for
Boolean operations (in the 3D modeling sense: adding and subtracting shapes from each other) are just as prone to mysterious failure as they are in plain Rhino. I suspect I’m running into problems relating to coplanar surfaces, which Rhino’s boolean solver hates. I’m “fixing” this by only automating the booleans in each strut (pipe plus spheres) and merging the whole structure by hand. This was never going to be a fully autonomous process, so no big problem there, but it’s something I’ll want to fiddle with in the future.
It’s also possible that one source of the mysterious failures I saw was an incompatibility between Python’s floats and Rhino’s ideas about acceptable tolerances. I incorporated this:
from decimal import *

getcontext().prec = 7  # need python code to have same precision as Rhino

def decimate(input):
    # output a Decimal at rhino-compatible amounts of precision
    return Decimal(input).quantize(Decimal('1.000'))
and now run decimate() on the data I get back from voronoi.py instead of telling Rhino to plot arbitrarily precise lines. It seems to help, but it’s possible that this is pure superstition.
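To make the effect concrete, here’s a quick illustration (my own example, not from the project repo) of what decimate() does to an arbitrarily precise float before it reaches Rhino:

```python
from decimal import Decimal, getcontext

getcontext().prec = 7  # match the precision used above

def decimate(value):
    # quantize to three decimal places, as in the snippet above
    return Decimal(value).quantize(Decimal('1.000'))

print(decimate(0.6666666666))  # 0.667
print(decimate(1.0))           # 1.000
```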
More vexingly, there are some problems with the Rhino for Mac Python plugin itself. A lesser one is that the documentation is not always very detailed, and Rhino’s own functions fail silently; for example, it took me a pretty long time to realize that all of the strings offered up to GetBoolean() needed to be purely alphanumeric, with no spaces, because it would display the prompt properly but just not show the expected tickyboxes. A bigger problem is that the output of print statements only shows up once/if a script has successfully run, which naturally makes it hard to use them for debugging. Another big one is that something in the system somewhere occasionally decides to ignore new code in favor of a cached version hidden somewhere, which leads to a lot of debug frustration as nothing you do can possibly affect your bugs until you remember what is going on and reboot Rhino. I’ve mentioned both of these issues on the forum, so perhaps I will have some solutions soon.
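One way around the GetBoolean() quirk, sketched with a hypothetical helper of my own (not anything from the Rhino docs), is to pre-sanitize option names so they’re purely alphanumeric before handing them over:

```python
# Hypothetical helper: strip anything non-alphanumeric from an option
# name so GetBoolean() will actually display its checkbox.
def sanitize_option_name(name):
    return "".join(ch for ch in name if ch.isalnum())

print(sanitize_option_name("Cap ends?"))  # Capends
```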
Lately I’ve been wanting to try using Spoonflower, which is a company that does print-on-demand fabric. (Like Lulu, which does POD books, you can either just print for yourself or receive commissions on sales by making your work public.) I ordered a swatch book from them, and I’m impressed with the range of fabrics and the printing.
Handing Spoonflower the output of the BASIC code is unlikely to go well, though; they’ll gamely accept either raster or vector images, but not text, and certainly not text in a relatively obscure encoding.
During my time working as a Teaching Artist at the MAKESHOP, I liked to have a project in my own hands so visitors didn’t feel like I was looming over them. Finding a good medium was tough: it had to be something that I could put down quickly to give help as needed, something that wouldn’t be destroyed immediately if I left it on a table, and preferably something that could fit in my pocket. Although the last time I’d tried to embroider anything was probably when I was about ten years old, embroidery fit the bill and I ended up doing a lot of it.
I had a handful of constraining rules for my embroidery projects. I used medium-weight preferably unprinted woven fabrics cut into swatches about six inches wide, quadrupled sewing thread instead of embroidery floss, and no hoop (for better pocketing). I tried to start and finish an embroidery on the same day, and everything was freehanded (no sketching in pencil) and improvisational.