What if you could go on your own imaginary safari, all from the comfort of your home?
Well, with Safari Mixer, you can! Safari Mixer is a fun experimental Action for the Google Assistant, created by rehab. Mix and match the body parts of different animals to create extraordinary hybrids with their own names, sounds, and even personality traits.
With this experiment, we wanted to showcase the versatility of the Google Assistant with a fun, interactive experience that would appeal to all audiences and inject a little fun into the everyday.
Inspired by the ability to create an Action with an easy handoff experience between voice-activated speakers and the phone, we asked ourselves: “How can we provide a great audio-first experience and build on it with visuals?” And most importantly, “How can we make a seamless journey for the user that comes to life across multiple devices?”
The solution? We designed an Action paired with a web app that allows players to speak or type the parts of their favourite animals, then watch unique creations come to life right in front of their eyes - capturing all the magic of a safari.
Using machine learning, we were able to generate thousands of unique GIFs and algorithmically created sounds that make you want to keep creating. You can even switch to your phone to keep exploring on the go.
Safari Mixer is powered by ‘Actions on Google’ - the platform that allows developers to build for the Google Assistant. We used Dialogflow to interpret what the player says, Firebase Cloud Functions to build complex responses, and the Firebase Realtime Database to save data. This combination made for a hyper-personalized experience.
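To give a feel for how the pieces fit together, here is a minimal sketch of webhook fulfillment logic in this style. It is not the experiment's actual code: the intent name, parameter names, name-blending rule, and storage bucket are all hypothetical, and it shows only the shape of a Dialogflow request being turned into a response.

```python
def hybrid_name(head: str, body: str) -> str:
    """Blend two animal names: front half of the head animal, back half of the body animal.
    (A made-up rule for illustration - the real Action has its own naming scheme.)"""
    return (head[: len(head) // 2] + body[len(body) // 2 :]).capitalize()

def handle_webhook(request_json: dict) -> dict:
    """Turn a parsed Dialogflow request for a hypothetical 'mix_animals' intent
    into a fulfillment response carrying the hybrid's name and a GIF URL."""
    params = request_json["queryResult"]["parameters"]
    head, body = params["head"], params["body"]
    name = hybrid_name(head, body)
    # Hypothetical bucket; the chatbot and web app would both derive URLs the same way.
    gif_url = f"https://storage.googleapis.com/safari-mixer-demo/{head}-{body}.gif"
    return {
        "fulfillmentText": f"You made a {name}! Say 'show me' to see it.",
        "payload": {"gifUrl": gif_url},
    }
```

In a real deployment this function would run inside a Firebase Cloud Function registered as the Dialogflow fulfillment webhook, with the saved creature written to the database as well.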
To produce multiple animal GIF combinations with ease we used After Effects in conjunction with custom .jsx and Python scripts. And to add that final touch, we used Magenta machine learning tools to create the unique animal sounds. And once your creation is finished, surface switching allows us to send GIFs to the Google Assistant on Mobile from a voice speaker like Google Home.
To create Safari Mixer, we had to address three main challenges: GIF generation, sound generation, and web app production.
No safari is complete without a whole host of extraordinary animals and ours is no different. With 24,360 animated animals available (and more on the way!) the challenge was to find a way to auto-generate them and allow for different shapes and body parts to be anchored and bound together with ease.
We tackled GIF generation with a .jsx script that automates the creation of .tif images from an After Effects project. A Python script kicks off a headless After Effects process to render the .tifs using the .jsx script and the corresponding After Effects scene, then assembles the .tifs into .gif files and uploads them to a Google Cloud Storage bucket that the chatbot and web app both link to. Then you’re ready to explore!
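A stripped-down sketch of that driver script might look like the following. The aerender path, project layout, and bucket path scheme are assumptions for illustration, not the production pipeline; the render step simply shells out to After Effects' headless renderer.

```python
import subprocess
from pathlib import Path

# Assumption: local path to After Effects' headless renderer.
AERENDER = "/Applications/Adobe After Effects 2020/aerender"

def render_tifs(project: Path, comp: str, out_dir: Path) -> None:
    """Kick off a headless After Effects render of one composition to numbered .tif frames.
    (The .jsx script has already assembled the right head/body layers in the scene.)"""
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [AERENDER, "-project", str(project), "-comp", comp,
         "-output", str(out_dir / "frame_[####].tif")],
        check=True,
    )
    # Next steps (omitted): assemble the .tifs into a .gif, e.g. with Pillow or
    # ImageMagick, and upload via the google-cloud-storage client library.

def gcs_object_path(head: str, body: str) -> str:
    """Deterministic bucket path, so the chatbot and web app can both derive a
    creature's GIF URL from its head and body parts alone."""
    return f"animals/{head.lower()}-{body.lower()}.gif"
```

Keeping the object path a pure function of the two animal parts is what lets 24,360 combinations be generated and then looked up without any extra bookkeeping.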
Machine learning techniques offered a way for us to mix audio clips linked to the head and body parts of animals to generate brand new hybrid noises.
A Python 3 script generates mixed animal sounds using deep learning and reinforcement learning techniques via TensorFlow implementations from the Magenta research project. The result? Suitable safari sounds for incredible animals.
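Magenta's models synthesize genuinely new audio, which is well beyond a short snippet. As a stand-in for the core idea - two source clips blended into one hybrid sound - here is a naive linear blend using only the standard library; the tones, sample rate, and blend rule are illustrative assumptions, not Magenta's method.

```python
import math
import struct
import wave

RATE = 16000  # samples per second (an arbitrary choice for this sketch)

def tone(freq: float, seconds: float) -> list[float]:
    """A plain sine tone standing in for one animal's source clip."""
    n = int(RATE * seconds)
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

def blend(a: list[float], b: list[float], mix: float = 0.5) -> list[float]:
    """Naive sample-by-sample blend of two equal-length clips.
    Magenta's learned models go far beyond this simple crossfade."""
    return [(1 - mix) * x + mix * y for x, y in zip(a, b)]

def write_wav(path: str, samples: list[float]) -> None:
    """Write mono 16-bit PCM so the result can actually be played back."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))
```

The appeal of the learned approach over this baseline is exactly that the output isn't a crossfade: it's a new sound that plausibly belongs to the new animal.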
Lastly, the web app gives a home to the imaginary safari and brings all the discoveries together in one easy place - the Mixipedia. Players can search through all the animals found to date and add their own to the library.
The web app is a React static page hosted on Firebase and built from the React Firebase Starter template. If you’re looking for inspiration, the Mixipedia is the place to go.
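Because saved creatures live in the Realtime Database, any client can read them over Firebase's REST interface, where every database path is addressable as JSON. Below is a hedged Python sketch of how a Mixipedia-style search over those entries might work; the project URL, the `animals` path, and the data shape are hypothetical.

```python
import json
from urllib.request import urlopen

# Hypothetical project URL - the real database lives under the team's own project.
DB_URL = "https://safari-mixer-demo.firebaseio.com"

def fetch_animals() -> dict:
    """Read every saved creature; the Realtime Database serves any path as JSON
    when '.json' is appended to it."""
    with urlopen(f"{DB_URL}/animals.json") as resp:
        return json.load(resp)

def search(animals: dict, term: str) -> list[str]:
    """Case-insensitive name search over already-fetched entries,
    as a Mixipedia search box might do client-side."""
    term = term.lower()
    return sorted(name for name in animals if term in name.lower())
```

The actual web app talks to Firebase through its JavaScript SDK rather than raw REST calls, but the data model - one shared library of named creatures - is the same.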
With so many extraordinary animals packed into the experience and a host of auto-generated fun facts that bring them to life, Safari Mixer is something that all the family can play.
Whether you’re a first-time explorer or a seasoned expert - there are plenty of hula-hooping Zebigeraffes and ticklish Tiguppyriches to be found.
Simply say “Hey Google, talk to Safari Mixer” to give it a go. You never know what you might find!