Smart assistants are everywhere – and their numbers are growing fast. Almost a quarter of UK households already own smart speakers and we’re getting more and more accustomed to seeing such technology as an extension of our own minds and bodies.
And they’re no longer limited to speakers and phones. Smart assistants are now being used to manage our memory, identity, and decisions across the myriad contexts in which we access information and internet services, from our homes, to our cars, to wearables.
Yet, with 80% of Americans experiencing some form of tech frustration every day, it seems technology companies aren’t getting things quite right. One issue driving this is a failure to properly understand the situations and environments in which customers use a brand’s products or software services.
To help, companies must think about ‘placesonas’ – an extension of traditional persona activity, but rather than thinking about the ‘who’ they must instead consider the ‘where’.
Placesonas are all about defining profiles for how users interact with smart assistants in different places. People adopt different behaviours towards tech depending on their environment and intent, so companies need to deduce the where, when, what, and how of these interactions in order to make them as effective and relevant as possible, and to maximise opportunities.
For example, where are users interacting? Is it at home? The library? The bus? Is it an intimate, private space or somewhere public and sociable? Are they in a calm, relaxed environment or a noisy, high-stress situation? And when are they interacting – a quiet evening, or a rushed commute? All of this has a huge impact on the way people interact with smart assistants, and on what they’re likely trying to achieve by doing so.
Understanding this and acting accordingly is absolutely essential for modern brands. So let’s consider how it could be put into practice.
Imagine if retailer JD Sports optimised the way it speaks to consumers in-store, with the expertise of JD Sports playing directly in your ears based on what you’re looking for. Just plug in your hearables and Google Assistant will do the rest to deliver a frictionless, personalised retail experience, while keeping your eyes and hands free to browse. Scanning a product QR code for a pair of trainers, you’ll hear how they fit with your own training goals, along with helpful reviews and recommendations tailored to you.
Building on the concept of Data ID Stores, in future these smart assistants will integrate with retail systems directly, connecting users to store stock, payments and pre-orders. In this vein, Nike could add a smart assistant to its SNKRS app, offering tailored recommendations as users enter a Nike store based on previous purchases or behaviours. The Disney app could offer exclusive AR filters to children, or unique guided experiences like treasure hunts within its stores or theme parks, using voice actors from its films to guide the children – and certainly some adults – on a curated journey.
Let’s also consider hospitality. Could Hilton Hotels, for example, implement placesonas? When you’re exploring the area around your hotel, you don’t always want to look like a tourist, wandering with your phone out. Hilton could help by sharing interesting history and culture points via your headphones as you explore, or providing GPS-based directions to restaurants or bars based on your preferences.
At present, brands are only just starting to explore and engage with the opportunities presented by a combined physical, visual, and audible experience. But the coming months and years will be incredibly interesting for brands as they realise that context is everything. The potential of placesonas is vast, offering a real opportunity to deliver phenomenal content and experiences in a way that works contextually for users and adds genuine value.