A voice controlled virtual and augmented reality creation tool where whatever you say happens. Simply say your ideas out loud, and they start to come to life. Use your voice to add characters, objects, and actions, building and interacting with the world around you.
New technologies allow us to solve old problems in new ways. In Moatboat, our goal was to get an idea from your head out into the real world, as fast as possible. We wanted the things that you create to be as alive and interesting as the ideas in your head.
We used virtual and augmented reality to rethink how and what everyday people can create. We broke free from the limits of traditional computer inputs, focusing on how people most easily communicate their ideas now - using language. In Moatboat, creation is as simple and delightful as saying your idea out loud.
In Moatboat, what you create can be quite complex. In fact, you can build entire living systems or even multiple overlapping systems. We gave everyday people the power to create dynamic, interactive simulations. Think of it a bit like The Sims or SimCity. You create the world around you, add rules, and build on what happens.
Everyone can be a creator when building is as simple as saying your ideas out loud. No special design or coding skills needed.
Using your voice, Moatboat allows you to create rich, immersive, interactive worlds. To add objects, simply ask for them. Use natural sentences such as “I need some sheep.” or “Put three wolves over there.” Use your hands to arrange the objects as you would like. Bring the world to life by adding actions and intent to objects. For example, you could say “Sheep eat grass.” or “Sheep jump over fences.” Use your imagination, and say whatever you want.
How did we deal with the infinite possibilities promised by “Whatever you say happens”?
We created a system of nouns and verbs adjustable enough to be put together in any combination and easily expanded over time. By using low poly models and simplified animations, we kept the system flexible and extensible. In Moatboat, people can dance, but so can trees, houses, and helicopters.
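The key property described above is that verbs are not tied to particular nouns. A minimal sketch of that idea (illustrative names only; this is not Moatboat's actual code) is an object model where every noun shares one generic schema, so any verb can attach to any object:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class WorldObject:
    noun: str                                    # e.g. "sheep", "tree", "helicopter"
    actions: list = field(default_factory=list)  # verbs attached so far

    def add_action(self, verb: str, target: Optional[str] = None):
        # No per-noun whitelist: any verb/target pair is accepted,
        # so "trees dance" works as well as "sheep eat grass".
        self.actions.append((verb, target))

    def tick(self):
        # Each simulation step, report what this object is doing.
        return [f"{self.noun} {verb}" + (f" {target}" if target else "")
                for verb, target in self.actions]

tree = WorldObject("tree")
tree.add_action("dance")            # trees can dance too
sheep = WorldObject("sheep")
sheep.add_action("eat", "grass")
print(tree.tick())   # ['tree dance']
print(sheep.tick())  # ['sheep eat grass']
```

Because behavior lives in the generic action list rather than in noun-specific classes, new nouns and verbs can be added independently and combined freely, which is what keeps the system extensible.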
Then we designed a language interpretation system that helped us understand what people intended when they said things like “make breakfast” or “feed my pets.” Unlike other interpretation systems, ours focused on creation, considering how people talk about their ideas and intents while they are creating. Machine learning helped us improve this system over time, learning from users as they tried new ideas.
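To make the interpretation step concrete, here is a toy rule-based sketch (not Moatboat's actual pipeline, which used machine learning) that maps utterances like the examples above to two hypothetical intents, CREATE and RULE:

```python
import re

# Illustrative quantity words an utterance might contain.
NUMBERS = {"a": 1, "some": 3, "one": 1, "two": 2, "three": 3}

def interpret(utterance: str):
    text = utterance.lower().strip(".!?")
    # Creation request: "I need some sheep", "Put three wolves over there"
    m = re.match(r"(?:i need|put|add) (\w+) (\w+)", text)
    if m and m.group(1) in NUMBERS:
        return {"intent": "CREATE", "count": NUMBERS[m.group(1)],
                "noun": m.group(2)}
    # Behavior rule: "<noun> <verb> <target>", e.g. "Sheep eat grass"
    m = re.match(r"(\w+) (\w+)(?: (\w+))?$", text)
    if m:
        return {"intent": "RULE", "noun": m.group(1),
                "verb": m.group(2), "target": m.group(3)}
    return {"intent": "UNKNOWN"}

print(interpret("I need some sheep"))
# {'intent': 'CREATE', 'count': 3, 'noun': 'sheep'}
print(interpret("Sheep eat grass"))
# {'intent': 'RULE', 'noun': 'sheep', 'verb': 'eat', 'target': 'grass'}
```

A real system replaces these brittle patterns with a learned model, but the shape of the output is the same: an intent plus the nouns, verbs, and quantities needed to act on the world.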
We don’t limit the things you can say and do to reality, because hey, if you want to feed your unicorn cotton candy, go for it. This is about creation, and we don’t want your imagination to have limits. The most innovative ideas often come from combining familiar ideas in new ways.
Not only was this an important philosophy for us as developers, it also became an important moment of delight for users. Most users start building worlds using sentences they expect to work, following the normal rules of reality. When they figure out that they can bend the rules - their imaginations open and their faces light up.
Moatboat initially launched on Google Daydream. We later built AR and VR versions for iPad, Oculus, Vive, and Magic Leap. Every version used voice as the primary input but also took advantage of the unique capabilities of each platform, such as using your hands, experiencing your creations at room scale, or placing them within the settings of the real world.