Discover more from Bits to Atoms
Artisanal Intelligence vs Artificial Intelligence in Design
Bridging craft and computational design with a spoon.
When we think of designing functional objects with generative, computational or AI-assisted design, our inclination is to consider ‘high performance’ applications like rocket engines, heat exchangers and medical implants. But what if we first consider ‘high touch’ objects, things we interact with directly through the most intimate parts of our body, such as our hands and mouth? Maybe a spoon?
And what if, instead of relying on machines for the ‘digital manufacturing’ of our AI-generated designs, we took the proposed geometries to skilled artisans, who could apply generations of experience to elevate the design beyond the purely visual ‘results’ of AI algorithms into a tactile, human experience?
Inspired by the Italian architect Ernesto Nathan Rogers, who wanted to design everything “dal cucchiaio alla città” (from the spoon to the city), Matteo Loglio and the team at oio, in collaboration with Giosampietro, developed Spawns with a process they call Artisanal Intelligence.
Matteo was kind enough to discuss the project, their experience exploring AI in the design process, and how generative AI and traditional crafts may be combined to explore product design and manufacturing.
What brought you to use ‘Artisanal Intelligence’ to explore the design of a ‘spoon’ with AI?
We have always been really interested in the role of new technologies in the design process, especially when it comes to generative tools and the “tension” they create around who is actually the creator. While the discussion around AI in design often revolves around automating the process or making it more efficient, as with Autodesk’s Generative Design, we were interested in using these tools to augment our process rather than to hand us final “perfect” outputs.
Over the past few years we have been experimenting with various machine learning algorithms, like GANs (Generative Adversarial Networks), to help us generate images of products, and we realized how much actual work and decision-making is hidden in the process: from selecting the right images for training these algorithms to selecting the most interesting outcomes, and everything in between.
It’s not really as automated as it seems; most importantly, it still requires a certain expertise in understanding how to tweak these algorithms to come up with more interesting results.
As our practice is centered around developing future products and tools for the real world, a purely conceptual experiment was not enough; we wanted to turn it into an actual real-world artifact that people can use and buy. Additionally, we thought that pioneering a full end-to-end process to create a product from scratch with the help of an artificial intelligence would give us valuable insights for repeating the same journey, potentially with more products.
Can you describe your workflow? How did you go from 2D images to physical objects, what software did you use at the bits stage, and what processes for the atoms?
First of all we curated our own dataset (basically an archive of images) of spoons and cutlery from various eras. Then we trained a GAN using Runway, which allowed us to generate a lot of possible design inspirations. These images were then fed into other algorithms, trained to recognise how “spoon-like” the images were. From this selection we then applied our own, very human point of view to pick out a few that were interesting, and by “interesting” we mean that they showed some unexpected aesthetics or functionality. Once we had a selection of 2D images, we used a custom-built pipeline in Houdini to create 3D objects, together with our partner in the project, Giosampietro. We mimicked the process of cutlery making in a parametric way, something that we called a “digital press”.
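As a rough illustration of that generate-then-filter step, here is a minimal Python sketch. The `spoon_likeness` scorer is a hypothetical stand-in for the recognition algorithms described above (oio’s actual models and pipeline are not public), and the batch of GAN outputs is mocked with random scores:

```python
import random

def spoon_likeness(image):
    # Hypothetical stand-in for a trained classifier that scores how
    # "spoon-like" an image is (0.0 = not at all, 1.0 = very). Here we
    # just read back a mocked, precomputed score.
    return image["score"]

def select_candidates(images, threshold=0.6, top_k=5):
    # Keep only plausibly spoon-like outputs, then shortlist the top k
    # for the human, "very human point of view" review step.
    plausible = [img for img in images if spoon_likeness(img) >= threshold]
    plausible.sort(key=spoon_likeness, reverse=True)
    return plausible[:top_k]

# Mock a batch of 100 GAN outputs with random spoon-likeness scores.
random.seed(0)
batch = [{"id": i, "score": random.random()} for i in range(100)]
shortlist = select_candidates(batch)
```

In the real pipeline the scorer would be a trained model operating on images, and the automated cut only narrows the field before the human selection oio describes.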
Finally, we had to create the production-ready CAD files, and to achieve that we actually put in quite a lot of manual work, tweaking the meshes to have the right thicknesses and feel. We printed many versions in different materials, but the big step was working with a real silversmith in Italy to move into wax molds and, finally, metal casting.
Houdini was not on many people’s radar for computational DfAM until recently. What in particular drew you to Houdini that you could not find in other software?
It’s honestly impressive what you can achieve with Houdini compared to the other 3D modeling software we are used to. The ability to simulate materials and processes helps to create custom mini tools and pipelines, which has been extremely rewarding and surprising. Our partner in this project, Giosampietro, is an expert in Houdini, and the way he used it proved it to be the most flexible tool for achieving what we wanted.
What unexpected challenges did you face in going from the digital to the physical? Did you receive any pushback from the ‘traditional craftspeople’?
Well, of course at first it was very interesting to see the reaction of a traditional silversmith from Italy when we pitched our project. The final products they helped us produce looked like spoons, but were nothing like the spoons they are used to making, so there were a lot of discussions with the people making the molds, who wanted to change details here and there. We learned a lot from them about what makes a spoon feel like a spoon, from the treatment of the edges to the balance and finishing.
There is a long history of tension between craft and design, dating back to the industrialisation of production, with grand statements by William Morris and the biased opinions of craftspeople, designers and ‘art critics’ ever since. How does this project and ‘Artisanal Intelligence’ play with the craft/design divide?
It probably sits right in the middle. Maybe because some of us come from Italy, where historically this divide is less relevant in the way we look at design. The designers of the past, at least in Italy, were strongly connected to industry, but the industry they were working with, like the furniture and lighting brands, was quite crafty. There is always a story of how the person running a machine influenced the final iteration of a lamp or chair by Achille Castiglioni or Joe Colombo.
Our main goal with this project was to try to bring the seemingly distant worlds of generative AI tools and of artisans and craft closer together, to create that connection and, eventually, discussion.
That’s the reason why we didn’t just stop at 3D printing, but went all the way to real production with humans and the extra layer of their point of view. Ultimately, we believe that just because something is designed in a generative way doesn’t mean it cannot be made by hand.
Another project that seems to flirt with art, design and generative algorithms is the Evolving Furniture project. What was the reason for exploring this Everyday Experiment?
Updatables is a speculative furniture collection we designed last year, in collaboration with SPACE10, IKEA’s research and design lab. We were interested in exploring alternative narratives to promote recycling and sustainability, beyond the most common approaches around reduction and conservation. While researching natural evolution and biomimicry, we had this idea of creating artifacts that could imitate natural selection, using genetic algorithms.
That’s how we came up with the concept for Updatables, a new type of furniture that evolves over time by updating itself with parts from the IKEA catalog, to survive in our living rooms and ultimately be more sustainable. We created our furniture evolution engine mostly as a proof of concept, but then we also developed the actual furniture collection designed by natural evolution.
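The evolutionary loop behind such an engine can be sketched with a simple genetic algorithm. Everything below is invented for illustration, the catalog, the parts, and the toy “sustainability” fitness score; oio and SPACE10’s actual engine is not public:

```python
import random

# A furniture "genome" is one part choice per slot from a (made-up) catalog.
# Each part carries a toy sustainability score used as its fitness payoff.
CATALOG = {
    "legs":  [("steel", 3), ("bamboo", 9), ("plastic", 2)],
    "top":   [("particleboard", 4), ("recycled_wood", 8)],
    "frame": [("aluminium", 5), ("reclaimed_pine", 7)],
}
SLOTS = list(CATALOG)

def random_genome():
    return {slot: random.choice(CATALOG[slot]) for slot in SLOTS}

def fitness(genome):
    # Fitness = total sustainability score of the chosen parts.
    return sum(score for _, score in genome.values())

def mutate(genome, rate=0.2):
    # Occasionally swap a part for another from the same catalog slot,
    # imitating a furniture "update".
    child = dict(genome)
    for slot in SLOTS:
        if random.random() < rate:
            child[slot] = random.choice(CATALOG[slot])
    return child

def evolve(generations=30, pop_size=20, keep=5):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fittest designs survive...
        population.sort(key=fitness, reverse=True)
        parents = population[:keep]
        # ...and reproduce with small random part swaps.
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - keep)]
    return max(population, key=fitness)

random.seed(1)
best = evolve()
```

A real engine would of course use actual catalog parts, geometric compatibility constraints, and a richer fitness function, but the survive-and-mutate loop is the core of the natural-selection metaphor.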
While this iteration of the algorithmic design may not really be able to export a functional object (or at least a comfortable one), what are your thoughts on who is the author of the final outcome?
Is it you, the designers of the experiment; the software algorithms that create the 3D file; or the human who interacts with the experiment, chooses an evolution and downloads the file to ‘manufacture’?
And who ‘owns’ the copyright of that design?
This seems to be a recurring question when talking about artificial intelligence tools and artifacts. While we are not specifically experts on the legal side of AI involving intellectual property and copyright, we have a point of view.
For us, AI tools are just tools; we try not to project agency onto the tool itself. Maybe one day we’ll live in a fully developed science-fictional world where sentient AIs can create, publish and sell artifacts without any kind of human intervention, but for now the tools we use still need a human agent to operate them and eventually publish the resulting designs.
Some of the generative tools like Midjourney and DALL-E clearly state in their T&Cs that they have unlimited copyright on the created images, but if you use them in more complex processes then things get fuzzy.
It will be interesting to see how the legal side of things will evolve in the next few years too.
OpenAI text models are BLOWING UP right now, the Midjourney journey is well underway with over a million users churning out graphics, and NVIDIA’s generative 3D experiments are just starting to see daylight. What are your thoughts on the emerging and near future of AI for ‘functional’ 3D applications? (I won’t ask about the long term because that would be cruel (unless you wanna).)
We think it’s safe to say that the automation trajectory will not stop, as it’s just getting started. Just like Gmail provides autocompletions, we’ll see the same pattern develop in design software, probably even more aggressively. It’s already happening in software development with GitHub Copilot and ChatGPT. Ultimately we believe it’s a good thing, especially for low-level repetitive tasks.
In the end we love design because of its creativity and expressive potential, and if we could automate some of the most boring tasks, then we can just focus on the actual complex creative challenges that an AI alone simply cannot solve.
That was fun, what next for oio?
Certainly not AI forks 🙂
We’ll continue to design future products and tools for a less-boring future, for humans and beyond. We have a couple of internal experiments that we cannot wait to put out into the world, together with a few more collaborations, so stay tuned!
If you are interested in collaborating on a project with Matteo and the oio team, reach out to explore how their experimental, technology-forward practice could help bring together the human and the machine for your future products.