
The Augmented Human: How Technology Began to Merge With the Body

1. A New Dawn
When Steve Jobs introduced the world to the iPhone in January 2007, he claimed that it would ‘revolutionise’ the industry.1 He wasn’t kidding. The smartphone revolutionised the world, spurring both a decade of innovation and the creation of a new technological paradigm. For years before, computers had meant big beige boxes that whirred away in our offices or studies—cumbersome machines that we interacted with using a keyboard and mouse. In time, we switched to semi-stationary laptops, which gave us a portable gateway to the internet and allowed us to work remotely, but nonetheless had to be lugged around in a bulky shoulder bag.
The launch of the iPhone in 2007—and the raft of rival smartphones that followed—heralded another dawn for technology: computers that were mobile and in ever closer proximity to us, devices that we could interact with through touch or gesture-enabled displays. Still, even Steve Jobs couldn’t have imagined the role these devices would end up playing in our lives. Today, the global smartphone market is approaching saturation: in many countries, two-thirds of people own a smartphone, and three-quarters of our time online is spent on the phone in our pocket.2
However, a new era is dawning, one that will further change how we interact with—and understand our relationship with—computers. Some of the technologies that look set to define this era are fast becoming commonplace—such as augmented reality (AR), voice technology and wearables. Others are less familiar—like extended reality, diminished reality, or object and pose detection. But what unites them all is clear: they will bring computers ever closer to our bodies, smoothing the interaction between people and devices.
SPACE10 is a research and design lab on a mission to create a better everyday life for people and the planet. Exploring the increasing role technology plays in our lives is part of that mission. This report represents a step in this exploration. Indeed, as we collectively wade into uncharted waters, the report anticipates likely components of the next paradigm shift. It recaps how we got to the point where technology and humans seamlessly interact. It imagines how computer interfaces could soon change; how augmented intelligence might transform technology; and how organisations and businesses are getting a foothold in this complex but opportunity-filled arena. Finally, it asks a simple question: where do we go from here?
2. How did we get here?
Innovation tends to come in sporadic bursts—and typically starts slowly. Although the first mechanical computer was created way back in the 19th century—leading to further iterations, such as a programmable computer, by the 1930s—the first truly digital computer resembling what we still use today was finished in 1946.3 It took three years to build, in a back room at the University of Pennsylvania: the brainchild of J. Presper Eckert and John Mauchly, the Electronic Numerical Integrator and Computer (ENIAC) took up 1,800 square feet and was powered by 18,000 vacuum tubes.4 And once it was completed, it took another decade to provide it with memory capability.

An American soldier with an ENIAC computer in 1946 at the Moore School of Electrical Engineering.
However, the ENIAC was little known outside university laboratories, many of which built their own computing capability in the ensuing decades. It wasn’t until the late 1970s and early 1980s that most people knew what a computer was—and, even then, only the most forward-thinking techies had one at home. In fact, as recently as 1985, just 12 percent of British households5 and 8 percent of American households owned a personal computer.6
By the 1990s, though, the PC revolution was hitting its stride. A beige box—and later, thanks to Apple’s rethinking of tech aesthetics, a sleek, colourful computer—took pride of place in many bedrooms, studies and home offices. Not long after the turn of the millennium, half of all British homes possessed a computer; by 2005, it was two-thirds; by 2009, three-quarters; and today, almost 50 percent of households worldwide have a computer.7 As for smartphones, the latest version of a ‘computer’? It’s expected that by the end of 2019, smartphone ownership will surpass 5 billion—a staggering number, considering the global population hovers around 7 billion people.8

Illustration — Marie Mohanna
Slow, speedy, then slow again: the S-curve
From the home computer to the smartphone, technological developments have tended to follow the same growth cycle: the so-called S-curve, a smooth line charting the pace of development and adoption of a particular technology.9
The S-curve goes a little something like this. At first, progress is slow—in some cases, painfully so. Only the most obsessive individuals are interested in the technology, and development moves at a snail’s pace. Then something happens. A critical mass of supporters is reached, and a concentrated effort is made to accelerate development. The line shoots upwards, reflecting both adoption and development. Then the line levels out as the market becomes saturated and the technology matures. Improvements are incremental; giant leaps become hops.
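This dynamic is typically modelled with a logistic function. As a minimal illustration (the parameter values below are invented, not drawn from any adoption dataset), a few lines of Python show the characteristic shape: slow early growth, a rapid middle, then saturation.

```python
import math

def adoption(t, ceiling=1.0, midpoint=0.0, steepness=1.0):
    """Logistic S-curve: slow start, rapid middle, saturated end.

    ceiling   -- market saturation level (100% of potential users)
    midpoint  -- the moment of fastest growth (the curve's inflection point)
    steepness -- how abruptly the technology takes off
    """
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# Illustrative only: adoption share in the years around the inflection point.
for year in range(-6, 7, 2):
    print(f"t={year:+d}  adoption={adoption(year, steepness=0.8):.0%}")
```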
This process occurred with personal computers, and then with laptops.10 And it happens on a macro level, too. Look back at the history of computing, and you can see a giant S-curve sweeping back through time. Zoom in and you can see smaller paradigms—individual eras in which one technology dominated, each of which had an S-curve of its own.
A cycle of revolution
The early computers were revolutionary, albeit not especially user-friendly. ‘They took over white-collar jobs and penetrated white-collar environments,’ says John Vines, professor at Northumbria University’s school of design. ‘They were massive bits of kit, which sometimes didn’t even have screens attached to them, and were primarily in workplaces.’ Most users were specialists who dedicated long periods of time to learning how to bring the abstruse technology to heel and use it to facilitate processes and labour-intensive tasks.

The first computers were revolutionary, but almost exclusively geared towards specialists. Illustration — Marie Mohanna
In many ways, these early terminals were the first examples of augmented intelligence, as outlined by Douglas Engelbart in 1962 in a journal article that’s credited with establishing the conceptual framework for augmenting human intellect.11 Too smart for the majority of people to understand, too dumb to work without significant human input, these early computers were aimed solely at the workplace. They took over jobs too laborious or mentally complex to complete in a timely fashion—like word processing, which secretaries had previously performed manually.12
When the next giant leap happened, it was a game changer. The terminal became a ‘personal computer’ (the clue’s in the name) and, not long thereafter, started sprouting screens, mice and graphical user interfaces (GUIs). Fun acronyms such as WIMP (windows, icons, menu, pointer) came along to make what was once an impenetrable technology more accessible—in more ways than one. The first Apple Macintosh, released with great fanfare in 1984, took computers into the mainstream.13
‘These technologies have of course become more user-friendly and accessible to the normal person, getting into people’s homes,’ Vines says. The ensuing decades were dominated by some variation of a WIMP computer, whether a desktop, a laptop or a relatively niche technology such as a Palm Pilot or personal digital assistant. ‘Then there was a shift to things away from the computer,’ says Vines.

The smartphone revolution didn't just make life more convenient. It fundamentally changed our relationship with technology by making it immediate and close to our bodies.
The world in the palm of your hand
It can be difficult, given the near-ubiquity of smartphones, to appreciate what a revolution they represented in how we interact with technology. For decades, we had been tethered to our technology, every action mediated through another device: second-hand contact. The smartphone made every action immediate and physical. We no longer needed a cumbersome keyboard, malfunctioning mouse or slippery stylus to open and close the windows on our computers: our fingers could do that. Better yet, we had apps. ‘It has been a massive shift,’ says Vines. ‘We’ve moved away from huge screens to engaging with work and play through a whole array of bespoke mobile applications.’
Look closely at the history of computing and you can see how much the way we interact with technology has limited its reach and scale. Desktop computing was a huge advance, and opened up the market to people who otherwise would never have come near a mainframe terminal in a university computer lab. That said, desktop computers were limited by their users’ willingness to learn how to navigate stacked menus and commit keyboard shortcuts to memory.
Blurring boundaries
‘When touchscreen devices came about, they opened up computing to a lot more people because more people could wrap their heads around it,’ says Joshua Walton, a mixed-reality designer and innovator who worked on Microsoft HoloLens as the device’s principal designer. From babies who expect the picture on a television to change if they touch the screen, to silver surfers—internet users over 50—who complete crosswords on their tablets, the ease with which we’ve turned technology into a closer emulation of how we interact with objects in real life is visible all around us.14 Because the digital world increasingly resembles the physical one, many of us are becoming digitally native, whether we like it or not. It’s the future that computer scientist Mark Weiser predicted in 1991 when he said that tech ‘will be so ubiquitous that no one will notice [its] presence’.15
However, the smartphone market is maturing. In many parts of the world, there are more devices than people. (The Maldives leads the way globally, with nearly twice as many phones as people.16) And developments in smartphone technology are minor, not world-changing. When Apple announced a raft of new products in September 2018, more than a decade after the launch of the iPhone, for the first time in years the headlines weren’t about its new phones. Instead, all eyes were on the latest iteration of the Apple Watch: a signal of what’s to come. And we’re already starting to see hints of future technologies in the palms of our hands: from augmented reality apps to digital assistants, we’re beginning to play around with innovations that may very well represent the next dominant technology paradigm. Still, we’re left with several questions: will these new technologies form the next dominant paradigm after all? Who will lead us there? And what will it look like?
Augmented Reality is already bleeding into wider cultural experiences. For example, music-focused publication Crack Magazine recently collaborated with legendary electronic artist Aphex Twin to create a series of posters which would come to life in AR throughout the streets of London.
3. The current state of tomorrow’s tech
The smartphone market saw its first year-on-year decline in sales at the end of 2017.17 ‘We have only been in the mobile era for a decade, and now something new is on the horizon,’ says Dennis Mortensen, founder of AI startup x.ai. Smartphones are set to lose their monopoly. As our attention shifts to devices on our wrists, in our ears and on our bedside tables, technology is becoming ever more closely integrated with our everyday lives, recasting our engagement with it.
Although the previous wave of technological development centred on the smartphone, the next S-curve may not be so narrowly focused technologically. Whether through our ears, eyes or mouths, with our bodies or on our wrists, we’re no longer limited to interacting with computers through keyboards and mice. Sales of Apple’s other products—including Watches and AirPods—increased by more than a third in the third quarter of 2018 year-on-year.18 The upward trajectory in the sales of wearables—a blanket term for technologies that can be incorporated into clothing or as accessories—is replicated outside Apple, with IDC analysts seeing record sales every quarter.19
Computer interfaces are changing in a more meaningful way than at any point in their history. While there have been significant leaps forward before—from spools of tape programmed to give computers commands, to command-line interfaces, to graphic-rich operating systems and so on—nothing quite compares to the developments taking place now.
The way we engage with computing power is becoming altogether more interactive. From technology responsive to our gestures, like Leap Motion’s hand-tracking software, to cameras that track our faces and eyes, as in smile-to-pay systems, we’re a world away from how we used to interact with computers.20 Where we historically were forced to speak the language of computers in order to interact, they are now starting to speak the language of people.

One day, information that’s currently embedded in our screens might bleed into physical reality—further blurring the line between the physical and digital worlds. Illustration — Marie Mohanna
Our augmented future
In some ways, the future is already here. Millions of voice-activated devices, manufactured by the likes of Amazon, grace households across the globe.21 China was responsible for over half of global smart speaker growth in 2018;22 and a third of American consumers, and a quarter of British ones, have one of those voice assistant-driven devices, too.23 Indeed, many of us use voice agents every day, and a study carried out by Juniper Research found that the number of voice assistants in the world is expected to more than triple by 2023—jumping to 8 billion from today’s 2.5 billion.24
And that’s just voice technology: just imagine what happens when what we see on our screens and our social platforms becomes just as intuitive. ‘Today, to be able to access Instagram data, you have to have an account and a smartphone. But tomorrow, that information may not be just on your phone screen but embedded into your environment—and that really changes your relationship with your physical reality,’ says Yasaman Sheri, a creative director and designer who has worked with the likes of Microsoft HoloLens, Google (X), the Rhode Island School of Design and NASA’s Ames Research Center. ‘If we begin to augment our sense of perception like this, we change the nature of how we trust what’s in front of us.’
Augmented intelligence describes these future interactions, bringing the power of algorithms, machine learning and data science to bear in a way that seamlessly interacts with our everyday lives. As we step away from the latest ‘AI winter’—broadly defined as a period during which interest and investment in AI plummeted—we’re starting to see computers that are trained in spatial and visual skills helping to define and recognise objects and people. Alongside that, we’re also seeing emotional and preferential intelligence systems that can observe our sentiments or choices, react to them and recommend actions based on our preferences. These technologies exist in what we already interact with—from the emergency braking system in cars to the algorithms that recommend songs, books and movies. But as their use becomes commonplace—and as more time is spent fine-tuning their skills—they’ll become even more of a staple of society, transforming themselves from eye-catching gimmicks into practical, meaningful aspects of our lives that we don’t think about.

In February 2019, the LEGO Group revealed LEGO® Hidden Side™, a LEGO play theme rooted in AR where children visualise a haunted world.
Predictably, many companies are exploring opportunities in this complex field. Much of this legwork, particularly in AI, is being done by what quantitative futurist Amy Webb has called ‘the big nine’: US giants Amazon, Google, Microsoft, Apple, Facebook and IBM, and Chinese insurgents Tencent, Baidu and Alibaba Group.25
Of course, a tech monopoly on defining the future of our digital and physical lives would pose risks in the long run. ‘Tech companies could own our new physicality and digitality—and each platform comes with its own politics,’ says Sheri. ‘So, which platform we choose to partake in could change our collective reality.’ Luckily, however, it seems that big tech isn’t alone in the sandbox of the future. Alongside them are a slew of non-tech companies—large and small, expected and unexpected—that are also developing tomorrow’s tech.
Many are devoting time and effort to ensure they aren’t left behind. Home improvement retailer Lowe’s have since 2013 been working with robotics, digital manufacturing and AR/VR visualisation.26 Toymaker LEGO have pioneered AR in-store experiences since 2009.27 South Korean car company Kia are gearing up to roll out a Real-time Emotion Adaptive Driving (R.E.A.D.) system—technology meant to reduce drivers’ stress levels by monitoring their emotions through analysis of heart rate, electrodermal activity and facial expressions.28 These companies are literally and metaphorically miles from Silicon Valley.
Technological innovation is converging in several areas, which we outline next. Yet what’s most exciting about the future are the players involved. For perhaps the first time, organisations from a wide variety of backgrounds are tackling the challenges together, devoted to creating, shaping and improving the world of augmented intelligence.
Taiwanese AI expert and former Apple and Google executive Kai-Fu Lee presenting an optimistic view of AI's potential for the future
4. The computer interface—what’s coming next?
‘I feel there’s a really critical moment right now where human values and practices—people’s lived experiences—are going to have to be advocated for a lot more strongly, and those who create these technologies will have to be much more aware of their impact on individuals and societies,’ says Vines.
For example, we can expect to see digital layers mixed further into our physical environment. ‘We can have digital objects embedded into the world which might be visible to certain people, and we can have physical objects that are overlaid with metadata we can only access with certain hardware or software,’ says Sheri. ‘It changes our entire reality—and who gets access to that is, to me, the biggest paradigm shift. If we begin to augment our sense of perception, and someone might have a sensory system to access that and someone else might not, it creates different realities for different people.’
In this early example of digital content overlaid onto our tactile world, an artificial-life ecosystem is accurately mapped and projected onto a landscape made of kinetic sand.
Perhaps the earliest example of a mass shift in how we perceive reality based on access to AR is the infamous Pokémon Go. Downloaded more than 800 million times since it was released, in July 2016, Pokémon Go swept the planet, with gamers obsessively chasing down their favourite characters.29 So immersive did the game feel that authorities had to warn people not to put themselves in danger while in pursuit of the rarest pocket monsters they could find in their lived environment.30 While the game is gimmicky, it introduced millions of people to an early form of augmented reality (AR): essentially, the overlaying of interactive, virtual elements onto real life. It augured a wider mobile AR revolution, one in which the camera feed is the interface.
With the September 2017 release of ARKit to 500 million of its devices, Apple spearheaded an entirely new level of industry confidence in the technology.31 As far as stamps of approval go, there’s not much like the CEO of one of the world’s most valuable companies saying that when AR is developed to its full potential, ‘we will wonder… how we ever lived without it.’32 Soon after Apple, Google followed suit with its release of ARCore for Android, completing mobile platform coverage and effectively ushering in an evolutionary phase for the technology.33

While Pokémon Go is gimmicky, it is one of the earliest examples of augmented reality being adopted en masse.
Now, over a year and a half later, the technology is gaining traction in various industries, from fashion and retail to entertainment and media. Several companies have invested time and money in AR platforms, and we can now enjoy a range of exciting and novel experiences. From trying on a new pair of glasses at home thanks to India’s Lenskart app, to seeing whether the new IKEA product range works with our existing furniture, to using our dining table as a battleground with The Machines’ AR gaming platform—it’s all possible, and most often quite easily from our phones. And those examples only refer to AR’s business potential: there’s a whole plethora of opportunity for creatives—such as developers and designers making their own face filters—to use the technology imaginatively. ‘For me, AR is the most exciting when it becomes part of your toolbox and you can actually explore creativity through this new platform,’ says Sheri. ‘It’s not even about bringing the ideas you explore in AR into real life. It’s about the feeling you get from using AR as a tool that enables you to explore technology in a way that’s fundamentally creative.’
Still, when one has to peer through a five-inch-wide viewport to enjoy AR, there’s a limit to how immersive an experience it is. Indeed, the way we experience AR on mobile phones can be frustratingly clunky. It works—but it doesn’t necessarily work well. There’s simply too much friction.

The so-called ‘smart glasses’ market is already proving to be a highly competitive space, with the global market expected to reach $20 billion by 2022—a big leap from $340 million in 2017. Illustration — Marie Mohanna
To remedy this, Walton believes we need to ‘create smudges between the physical and digital’—in other words, take the tech away from the smartphone and more seamlessly integrate it into our lives. Just as we left the mouse and the keyboard peripherals behind when we migrated to the more intuitive touchscreen smartphone, so do we need to demolish the barriers between seeing and doing.
For some, that barrier was broken in 2016, with the release of HoloLens, Microsoft’s mixed-reality headset. (Recently, the company even released HoloLens 2.) Once technology moves this close to our senses, something interesting happens: we move from looking at the interface to stepping into it, and with that the mediating role of technology has the potential to all but disappear into the background.
HoloLens 2 is part of the leading pack of devices that more seamlessly integrate the digital and physical aspects of our lives. The so-called ‘smart glasses’ market is already proving to be a highly competitive space, with the global market expected to reach $20 billion by 2022—a big leap from $340 million in 2017.34 While potential mass adoption is still some years out, AR glasses are tipped to be at the centre of a new era in human-computer relations. The change is already on its way: in August 2018, one of the most-funded companies in history, Magic Leap, released what could be considered the first of the next generation of glasses.35
“For me, AR is the most exciting when it becomes part of your toolbox and you can actually explore creativity through this new platform.”
In terms of product launches, the first half of the next decade should set the scene for how our relationship with our devices will cement itself. Apple is working on AR glasses;36 and Facebook, AI computing company Nvidia and Google have all committed publicly to releasing glasses within the next few years.37 Meanwhile, according to Michael Abrash, chief scientist at Oculus VR, developments in VR are being fuelled by advances in AR—though, like others, he believes the two technologies are likely to converge in time.38
For now, the technology is quite immature: today’s headsets suffer from a limited field of view, giving them a viewport problem of their own. But as more and more competitors enter the market, the performance of the underlying technology should only improve. Furthermore, the current technological kinks—inherent in everything from SLAM to occlusion—will be ironed out. The price is likely to come down, too. Today, a headset can easily set you back $2,000-3,000.

Today, an AR headset sets you back a few thousand dollars. In the near future, it might only be a few hundred—which may be the switch that'll make this technology accessible and desirable for many people.
Appealing aesthetics and cultural acceptance will be as crucial to the success of the emerging technology as smoothing out the technical kinks. Ground-breaking technology is more likely to wither on the vine if people view its users as ‘glassholes’ or worse (as Google Glass knows all too well).39 Although research around sentiment towards AR glasses is sparse, a 2015 study found that people are more concerned about how strange they’d look wearing these accessories than excited about their potential benefits.40 So before wearable technologies can be considered serious contenders to the smartphone’s throne as our immediate go-to device for digital interaction, they will need to be socially accepted: we will need to feel comfortable wearing them—and choose to invite them into our spheres of intimacy.
All in the gestures
Our body language is fundamental to how we interact with other people. In fact, psychologists estimate that non-verbal communication comprises as much as 93 percent of how humans interact.41 Indeed, from a simple shrug to a languid eye-roll, what we don’t say is a major factor in interpersonal communication.
Once we step into the interface and leave behind both the five-inch viewport and touchscreen technology, we’ll need a new way of interacting with our augmented or virtual environment—one that relies on body language and gestures. In other words, as we increasingly interact with virtual representations of physical objects within a space, we will use our own bodies to do so.
Companies such as Leap Motion and Nimble VR (recently acquired by Oculus) are developing technology that uses bodily interactions and gestures to make the connection between the physical and the digital feel more real. Microsoft has also recognised the importance of gesture tracking, and baked gestural interaction into its HoloLens operating system.42 Similarly, Magic Leap One supports gesture interaction, albeit currently somewhat rudimentarily.43
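To give a flavour of what such gesture-tracking systems compute, here is a minimal sketch in Python. It assumes hypothetical fingertip coordinates of the kind a hand tracker might report, and detects one of the simplest gestures, a pinch, as the moment two fingertips move within a couple of centimetres of each other.

```python
import numpy as np

def is_pinch(thumb_tip, index_tip, threshold=0.02):
    """Detect a pinch: thumb and index fingertips closer than ~2 cm.

    thumb_tip/index_tip are 3D positions in metres, of the kind a
    hand-tracking system such as Leap Motion's would report per frame.
    """
    return np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip)) < threshold

# Hypothetical frames of tracking data: the fingertips converge into a pinch.
frames = [
    ((0.00, 0.10, 0.30), (0.05, 0.12, 0.30)),
    ((0.01, 0.10, 0.30), (0.03, 0.11, 0.30)),
    ((0.02, 0.10, 0.30), (0.025, 0.105, 0.30)),
]
for i, (thumb, index) in enumerate(frames):
    print(f"frame {i}: pinch={is_pinch(thumb, index)}")
```

Real systems classify far richer gestures, typically with machine learning rather than a hand-written threshold, but the principle of turning tracked body coordinates into discrete commands is the same.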
Despite being released almost two decades ago, the American film Minority Report is still touted as an aesthetic blueprint for how gestural interactions are developing today.
Yet again, however, these are early days. As the underlying enabling technology matures, we need to figure out what to do with it, and when. For example, when does it make sense to emulate physical properties and interactions such as grabbing and moving a virtual object as one would a physical one? When does it make sense to use the unique properties of a virtual environment to, say, pick up objects from afar simply by pinching them on screen?
While some companies are figuring out the answers, others are focusing on nimbler kinds of bodily interactions. Google’s Project Soli, presented a couple of years ago, used radar technology to enable micro-gesture control of a watch. And the TrueDepth 3D sensor used in the iPhone X (and subsequent models) allows the phone to track and map its user’s facial movements.44 For now, micro-gestures and TrueDepth are used primarily in early demos or games (for example, allowing the gamer to power an emoji up the screen by raising her eyebrows), but more developed ideas should emerge.

Gestural interfaces are already making their way into fashion and performance. Here, dancers store their own gestural expressions in garments, where each piece of the material expresses the nerve fibers that record our body language in our cells. Photo — Ars Electronica
While this is all part of an evolution of interfaces away from keyboards towards more naturalistic movements, it remains uncertain whether gestures will grow into the dominant interaction technology, as the keyboard once was (and as voice was heralded to be in 2017). In all likelihood, they will end up co-existing, providing future creators and designers with a bigger box of interaction instruments to play with when creating new tools or experiences.
Alexa, cook me a potato
In September 2018, Amazon unveiled dozens of new products.45 They all have one thing in common: they can be controlled with your voice. The US tech giant is one of several major companies now exploring voice interaction. From Microsoft’s Cortana to Apple’s Siri to Baidu’s Little Fish, mediating commands through voice interfaces is still very much in vogue.
In recent years, voice technology—both the capture of speech and the synthesis of artificial voices—has continued to mature, spearheaded by tech firms like Google and Baidu. We’re now close to the point of having artificial voices that are indistinguishable from real ones to the untrained ear. While this is significant for the immersive qualities of voice interaction technologies, it raises some interesting questions. When an artificial voice calls you, should it declare its identity? And what would it mean for a brand or company to have its own voice?

We’re now close to the point of having artificial voices that are indistinguishable from real ones to the untrained ear. So when an artificial voice calls you, should it declare its identity? And what would it mean for a brand or company to have its own voice communicated through a smart speaker?
Speaking is our dominant form of communication. Yet for as long as most of us have owned computers or smartphones, we’ve had to type out, click on or scroll through our requests or messages rather than say them aloud. In countries where the alphabet is easily rendered as twenty-odd characters on a keyboard, that’s perfectly feasible, if a bit cumbersome. But in countries like China, whose logographic writing system comprises thousands of characters, traditional keyboard input is far more laborious. Indeed, Baidu’s TalkType keyboard app defaults to voice recognition. Moreover, according to one recent survey, more than three-quarters of smartphone owners in China use voice interaction.46
There are also certain situations in which voice interaction is simply safer—such as while driving, where taking one’s eyes off the road can be fatal—though this field is still being explored. For now, voice interaction will benefit situations where touchscreens can’t be used or phones need to stay in the pocket.
However, while voice technology is certain to play a role in the years to come, we are still figuring out what it works well for and what it doesn’t. The temporal, invisible nature of audio makes certain interactions feel natural and others laboured. Back-and-forth questions and answers, for example, are easy with a digital assistant; instructions for complex or lengthy tasks are not. In certain situations, a picture really does tell a thousand words. Nor is there any audio equivalent of glancing: imagine having a voice assistant read you a recipe, going through the ingredients. Reading them to you top-to-bottom is likely to send you on a mad dash around the kitchen, because the assistant does not know where you keep things—whereas a written recipe lets you refer back to what you need at your own pace.
The takeaway from all this? The developments we see today, including voice technology, are likely to form just one part of a multi-modal future—where voice, gesture, text and other interfaces all interact.

Despite what science fiction and popular culture envisioned, this isn't quite what robots ended up looking like, is it?
5. Augmented intelligence
‘Computers already see better, hear better, and are better at many other things,’ says Mo Gawdat, the former head of Google X, Google’s so-called moonshot laboratory. He predicts that AI will help machine intelligence surpass human intelligence by 2029. But the future Gawdat anticipates isn’t as dystopian as sometimes depicted by science fiction and the media. It’s a world in which computers enhance human existence rather than seek to wipe it out. A world in which algorithms and machine learning help humans make quicker, better-informed decisions, thanks to the power of processing.
In short, it’s a world of augmented intelligence. Or, as Gawdat puts it: ‘If I were to ponder the reality of existence, if I wanted to think deeply about the overlap between physics and consciousness through quantum physics and there is a machine 16,000 times smarter than I am helping me find facts and organise the analysis, I may actually find an answer.’
More than that, it means a world in which artificial intelligence plays an even greater role assisting humans. A world in which emerging technologies empower our everyday lives by combining the smart support, suggestions and choices of AI with the intuitive presentation and interaction of AR and beyond—all with a dash of human. Indeed, from detecting objects and mapping spaces, to recognising images and predicting individual preferences, we’re fast approaching a world of augmented intelligence.
George Yang—founder and CEO of AI Pros—explains how human and machine intelligence can come together to create something better.
Spatial intelligence
A key ingredient in developing the next generation of mixed-reality technology is enabling computers to reason about the spatial aspects of the world and how they work. At the simplest level, spatial reasoning can stop a robotic vacuum cleaner from persistently running into objects and instead let it learn to move smoothly within a space. While humans do this instinctively, getting a computer to understand the nuances of our environment is harder than it looks.
First, a computer needs to locate the planes in an environment. Once it knows that, say, a floor or a tabletop is a plane, it can navigate a space accurately. Overlaying virtual objects onto the physical world, however, requires a higher degree of precision. The computer must not only recognise planes, but also understand the fixed physical elements in a space, including their depth, and how they will interact with virtual additions—something crucial for placing virtual elements into your space realistically and in a way that enables interaction. All of this is called spatial intelligence, and it is one of the building blocks of augmented reality. AR uses simultaneous localisation and mapping (SLAM) to place and track virtual objects in relation to physical features, giving the impression that a virtual object resides in physical space.
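To make the mechanics concrete, here is a minimal sketch in Python of one core geometric step, under simplifying assumptions (a perfectly detected plane and a known camera pose; frameworks such as ARKit and ARCore wrap this in higher-level APIs). A tap on the screen becomes a ray from the camera, and intersecting that ray with a detected plane tells the system where to anchor a virtual object.

```python
import numpy as np

def intersect_ray_plane(ray_origin, ray_dir, plane_point, plane_normal):
    """Return the point where a camera ray hits a detected plane, or None.

    A detected plane is stored as a point on the plane plus its normal;
    a screen tap is unprojected into a ray from the camera into the scene.
    """
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-6:          # ray is parallel to the plane
        return None
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    if t < 0:                      # plane is behind the camera
        return None
    return ray_origin + t * ray_dir

# Illustrative numbers: a floor plane at height 0, camera 1.5 m up,
# looking slightly downwards. The result is where to anchor the object.
camera = np.array([0.0, 1.5, 0.0])
ray = np.array([0.0, -0.5, -1.0])
ray /= np.linalg.norm(ray)
floor_point, floor_normal = np.zeros(3), np.array([0.0, 1.0, 0.0])
print(intersect_ray_plane(camera, ray, floor_point, floor_normal))
```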
A range of different startups and big players are already working on bringing spatial intelligence to market, including Reality Editor, Abound Labs, Modsy, 6d.ai, Selerio, Microsoft and Magic Leap. These companies are helping computers chart the world and develop the tools to navigate it. The following are just a few of the developments and techniques they’re working on that fall into the category of spatial intelligence. The ultimate goal for all of them, however, is to allow for further interaction between physical and virtual objects—in particular, permitting virtual objects to move behind physical objects and effectively be occluded, as well as allowing virtual objects to collide with physical ones.
- Diminished reality, or the ability to remove physical objects from view.
- Mediated reality, or the ability to remove physical objects from view as well as move them around in virtual space. Just how convincing both diminished and mediated reality are depends on the system’s ability to guess what to put in place of the removed object.
- Object detection and pose estimation, a field which involves identifying objects (detection) and then figuring out how they’re placed in relation to the other elements in a 3D space (estimation). Pose estimation in particular is crucial for accurately overlaying virtual layers on top of physical objects, thereby augmenting them; a minimal sketch of this step follows the list.
- Emerging technologies that enable the building of 3D models of entire areas. A good example is 6d.ai—a startup creating AR with 3D mesh capabilities.
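As flagged in the list above, here is a hedged sketch of the pose-estimation step, with invented numbers. A pose is simply a rotation plus a translation; combined with the camera intrinsics, it lets the system project a point on a physical object into pixel coordinates, which is what allows a virtual layer to sit precisely on top of that object.

```python
import numpy as np

def project(point_3d, rotation, translation, focal, centre):
    """Project a 3D object point into 2D pixel coordinates.

    rotation/translation -- the object's estimated pose in camera space
    focal/centre         -- camera intrinsics (focal length, principal point)
    """
    p_cam = rotation @ point_3d + translation   # object space -> camera space
    x, y, z = p_cam
    return np.array([focal * x / z + centre[0],  # perspective divide
                     focal * y / z + centre[1]])

# Invented example: an object 2 m in front of the camera, with no rotation.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
corner = np.array([0.1, 0.1, 0.0])              # a point on the object
print(project(corner, R, t, focal=800.0, centre=(640.0, 360.0)))
```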

Our recent SPACE10 Playful Research project, Spaces on Wheels, takes advantage of object detection and pose estimation to accurately place self-driving cars into your environment in AR. Rendering — foam studio
Visual intelligence
As cameras become an intrinsic part of our everyday lives, AI is increasingly being trained to analyse what they capture—and entirely new products and opportunities are emerging as a result. For example, we can already use our phones to visually search for products in apps like IKEA Place simply by pointing our cameras at items we like and waiting for the app to find the most ‘matching’ products from the IKEA catalogue. In the near future, the visual intelligence software in our phones will be able to recognise items and put them in our virtual shopping bags (with the actual items being compiled for collection or delivery later).
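As a rough sketch of how such a visual search might work under the hood (the file names and tiny catalogue below are hypothetical, and a production system would be far more sophisticated), a pretrained network can reduce each image to a feature vector, and the camera frame is then matched against the catalogue by cosine similarity.

```python
import torch
import torchvision
from torchvision import transforms
from PIL import Image

# Use a pretrained network as a generic feature extractor
# (torchvision >= 0.13 API for pretrained weights).
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Identity()   # drop the classifier, keep the embedding
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """Turn an image file into a feature vector."""
    with torch.no_grad():
        return model(preprocess(Image.open(path).convert("RGB")).unsqueeze(0))[0]

# Hypothetical catalogue: compare the camera frame against each product shot.
catalogue = {name: embed(name)
             for name in ["armchair.jpg", "floor_lamp.jpg", "side_table.jpg"]}
query = embed("camera_frame.jpg")
best = max(catalogue, key=lambda n: torch.nn.functional.cosine_similarity(
    query, catalogue[n], dim=0))
print("closest match:", best)
```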
This isn’t especially far-fetched. Consider the Amazon Go store, in Seattle, which allows shoppers to take what they want and leave without stopping at a checkout. A network of 100 cameras equipped with image-recognition software tracks each shopper, connecting them with their phone and Amazon account, and charging them for every item they put in their basket. There are three Amazon Go stores in Seattle and one in Chicago, and the tech giant plans to open 3,000 more by 2021. Similarly, the Chinese chain BingoBox has 400 ‘smart supermarkets’: unstaffed grocery stores that let you enter via QR code or WeChat account, scan the items you want to buy and pay with AliPay or WeChat Pay.47

AI in apps like IKEA Place enables you to visually search for items you like and wait for the app to find the most matching products.

A speculative proposal for BingoBox storefronts, created by Zhou Pu Fang.
The technology underpinning both the Amazon Go and BingoBox stores is highly sophisticated, and—as with all the innovations outlined in this report—the teams developing it aren’t working in a vacuum. Many major corporations and leading university computing labs are tackling the challenge of visual intelligence. For example, Microsoft Cognitive Services powers Picdescbot, a Twitter bot that describes pictures (with varying degrees of success). And Google Lens, launched in June 2018, tries to classify any object it sees through the camera lens of a smartphone.
Once your phone has successfully classified the objects you’re pointing the lens at, its assistive technology—in other words, software which makes your daily life easier—may one day be able to interpret and analyse those images and determine your individual taste, style or preferences. Indeed, visual intelligence is only half the story. When it comes to providing personal experiences, assistive intelligence counts too.
Preferential intelligence
Although preferential intelligence isn’t an official term, it makes sense when you consider how your Amazon, Gaana or Tencent Video account works: all three are examples of software that can recognise and analyse your individual likes and dislikes, account for your taste, and recommend new content or products.
Indeed, so-called recommender algorithms drive some of today’s most popular services. Whenever we watch Netflix, we help its preferential intelligence algorithm determine what to recommend next. When we buy a product on eBay, it gives the company a broader picture of what we spend our money on and affects which products we see the next time we browse the internet. And these days, your Instagram feed shows you your friends’ latest photos and smoothly integrated ads for products you actually like—on almost a 50/50 basis.
Preferential intelligence affects every element of our digitally connected lives, and will only continue to do so. Already, our Google results are tailored daily to what the search engine believes to be our interests, trapping us in filter bubbles. Simply put, the content we produce provides crucial data points for recommender algorithms—and in a way, we have become too beholden to algorithms getting to know us to unpick our reliance on them. According to one recent study, ‘intelligent agents’ that consider our preferences and recommend products they believe we will like were expected to influence 10 percent of our purchase decisions in 2018.48 And, as their intelligence increases, that figure will doubtless rise.
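The core mechanic behind such recommenders can be shown in a few lines. Below is a deliberately tiny, invented example of item-based collaborative filtering in Python; production systems operate on the same principle at vastly larger scale and with far more signals.

```python
import numpy as np

# Invented ratings matrix: rows are users, columns are items, 0 = unrated.
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Item-item similarity: items rated alike by the same users score high.
n_items = ratings.shape[1]
sim = np.array([[cosine(ratings[:, i], ratings[:, j])
                 for j in range(n_items)] for i in range(n_items)])

def recommend(user):
    """Score each unrated item by its similarity to items the user liked."""
    scores = sim @ ratings[user]
    scores[ratings[user] > 0] = -np.inf   # don't re-recommend rated items
    return int(np.argmax(scores))

print("recommend item", recommend(user=0))  # user 0 likes items 0 and 1
```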
In some ways, this may not be such a bad thing: as access to information via digital platforms continues to increase and feel almost overwhelming, we might appreciate software that helps us sift through the noise and quickly find what we’re looking for. At the same time, however, we need to consider just how deeply we want to let these preferential systems into our psyche—and take steps to ensure our privacy and independence don’t falter as preferential software gets smarter.
Emotional intelligence
Humans have learned to read between the lines and intuit emotions over hundreds of thousands of years. We are emotional beings, and our attitudes can change in a second. Given how subtly humans perceive emotion, teaching machines to understand and parse such soft signals the way we do is difficult.
The aim of teaching computers to understand our emotions—and to reflect them back during interactions—is to improve human-computer interaction. Many universities, such as MIT, are investigating ‘affective computing’: teaching AI how to monitor facial expressions, voices and writing, interpret feelings and reply sympathetically.49 ‘If you want robots with social intelligence, you have to make them intelligently and naturally respond to our moods and emotions, more like humans,’ says Oggi Rudovic, who researches affective computing at MIT.

Sentiment analysis can help save lives. Danish startup Corti's AI can diagnose cardiac arrests more accurately than human call-handlers.
So-called sentiment analysis can help provide better advice and recommendations: startups such as Aylien use natural-language processing to parse what someone is seeking to communicate in writing, and to craft a response; Trackur, Revuze and Sentigeek are doing similar work. But sentiment analysis can also save lives. Software developed by Danish startup Corti analyses both the voices of people calling emergency services and the words they use. It can diagnose cardiac arrests more quickly and more accurately than human call-handlers, and help them triage the patients most in need. In a medical emergency, where every second counts, Corti’s emotional intelligence is literally saving lives.
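As a small taste of how text-based sentiment analysis works, here is a minimal sketch using NLTK’s off-the-shelf VADER analyser (the example sentences are invented, and systems like Corti’s, which work on live speech, are far more sophisticated).

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-off download of the lexicon
sia = SentimentIntensityAnalyzer()

# Invented examples: 'compound' runs from -1 (negative) to +1 (positive).
for text in [
    "I love how easy the new app is to use!",
    "The delivery was late and the support was useless.",
]:
    scores = sia.polarity_scores(text)
    print(f"{scores['compound']:+.2f}  {text}")
```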
Caregiving is an increasingly resource-intensive area, especially as populations age and more people are diagnosed with neurodegenerative diseases. Teams in South Korea, Japan and Ireland are developing emotionally intelligent robots that can support patients diagnosed with Alzheimer’s disease by reminding them to take medication or by perceiving their emotional state and reporting the information to human caregivers.50 For example, South Korea’s Silbot3 monitors patients and reminds them to take their medication; other companies are developing interactive robots that can foster an emotional connection with elderly or lonely people. Early tests have proved promising—and such robots could help transform the future of healthcare.
6. Brand movements
Universities and startups aren’t alone in having a significant stake in augmented intelligence. Some of the biggest names in retail, fashion, design and marketing do too. Consider the growth of ‘technology as fashion’ and the growing popularity of devices and accessories on the catwalk.51 Look, too, at the focus on design and the way that many tech and fashion firms are joining forces to make once-geeky products more aesthetically pleasing.52 After all, as technology is ever more intertwined with everyday life, tech-enabled objects will increasingly sit alongside everyday objects. Ugly devices are out; well designed and seemingly handcrafted gadgets are in.
What’s also interesting about the zeitgeist is that brands are adopting emerging technologies ever earlier, even if they are not classically thought of as tech companies. In other words, brands in fields like fashion and culture are actively participating in an arms race to use cutting-edge technology such as AR. And, in so doing, they are of course helping to improve the tech and introduce it to the mainstream.
Focus on design
Tech is trendy now. Just look at the queues snaking out of Apple’s stores ahead of the company’s product launches. Many of us pre-order and anticipate the latest smartphone or smartwatch—and then flaunt it with pride. Meanwhile, YouTube stars, influencers and other early adopters help fuel word-of-mouth reviews and propel a movement that is steadily making devices and accessories as much a part of our daily lives as wearing shoes and socks.

As technology becomes ever more intertwined with everyday life, tech-enabled objects—such as Apple’s AirPods—will increasingly sit alongside everyday objects. As a consequence, they need to hold their own aesthetically.
But to make products more desirable, they must be attractive—beautiful, even. Which is why we are seeing increasing focus on design. As technology becomes ever more intertwined with everyday life, tech-enabled objects—such as Apple’s AirPods—will increasingly sit alongside everyday objects and, as a consequence, they need to hold their own aesthetically. Similarly, technology is getting ever closer to our body. From the Apple Watch to Snapchat’s Spectacles to HoloLens, these devices are no longer judged solely on their purpose and utility, but also on how they look.
If we’re to fully embrace this tech-enabled, tech-enhanced future, it’s important that the devices we choose to wear are well designed, and that we feel confident and socially accepted when wearing them. By incorporating smart, sleek design, it’s possible to improve the perception of wearable devices and increase their adoption. That appears to be the modus operandi of ‘smart glasses’ maker North, which wants to build ‘a future where technology is human-centric, discreetly built into fundamental parts of our lives that already exist’.
Meanwhile, many companies are rethinking their product-development process to more seamlessly incorporate design at an earlier stage. Others are embracing the ‘handcrafted’ aesthetic in an attempt to make their devices seem natural. For example, according to Ivy Ross, Google’s vice president of hardware design, the company’s Home Mini speaker borrows liberally from the appearance of pebbles.53 It’s ironic, perhaps, that as technology advances and imperfections are ironed out, designers are aiming to make it more natural and everyday.
Of course, the likes of Apple have long fine-tuned their products to make them more attractive. The transition from EarPods to sleek, cable-free AirPods illustrates how form is considered in tandem with function in the field of accessory design. Apple’s hiring of renowned industrial designer Marc Newson, in 2014, to work alongside design chief Jony Ive further demonstrates its instinctive grasp that technology must be well designed and stylish—if not fashionable.54

Brands are embracing the handcrafted aesthetic to make tech seem more natural. For example, Google's Home Mini speaker borrows from the appearance of pebbles.
And it’s not just about tech brands becoming more aesthetically conscious, either: the relationship works the other way around. As tech has become trendy, it has caught the eye of fashion houses, designers and high-street clothing brands. Renowned brands like Fossil and Hermès are designing smartwatches and wearables, whether through standalone devices or tie-ins with tech firms.55 Denim brand Levi Strauss even partnered with Google to create a so-called ‘smart’ jacket: its cuff not only vibrates when you receive a phone call, but can also give you directions and play music.56 What this further demonstrates is that technology no longer serves a single purpose—a way of making a phone call, say, or of telling the time. Rather, it has social value.
Bespoke experiences
‘Personalisation is what’s happening,’ explains Lara Marrero, strategy director and retail practice leader at design firm Gensler. ‘Companies are realising that to meet the needs of their customers, they need to use the data that customers have been giving them for a really long time.’
Indeed: as well as using emerging technology to develop new products, brands are harnessing it to give people opportunities for new levels of engagement. Perhaps most commonly, some are using AI to personalise content and tailor communication and marketing to various audiences. Netflix, for example, emails suggestions of what to watch next, based on your recent viewing history, while other companies use targeted marketing to follow you around social media. After all, when you understand how people spend their time with your brand, you can figure out what content they want to see more of and how best to capture their attention.
To do this, all of these companies are harnessing big data, collected and sifted at great speed thanks to the immense computing power behind artificial intelligence, which helps them recognise trends as they break and capitalise on them with their marketing messages. By leveraging existing and emerging technologies—plus the wealth of data that each of us provides on an ongoing basis—it’s now possible not just to home in on a brand’s audience, but also to identify individuals and personalise the experience for them. This can be done through both content (using AI) and form, by creating hyper-personalised new experiences for shoppers to enjoy—whether in a physical or digital space, or a combination of the two.

American department store giant Macy’s piloted the use of VR to display more furniture in less space—and saw big returns on investment.
The American department store giant Macy’s, for example, is testing product displays that use AR and VR.57 It piloted the use of VR to display more furniture in less space, and found that the tech ‘significantly’ increased transaction size and reduced returns. Macy’s is also piloting a mobile checkout app that will allow customers to scan items using their smartphone’s camera.
‘I see the future of digital as how digital can be in service to the physical experience,’ says Marrero. Moreover, how can brands use all the data they have about people to generate meaningful insights about how they shop—and how can they use those insights to make the digitally-enabled physical store a better customer experience?
Early adopters
Never has the maxim ‘every company is a tech company’ been more relevant. Companies recognise the need to innovate—and be seen innovating—and are seeking to adopt technology in earlier, more significant ways than ever. In so doing, they’re breaking with decades of thinking that businesses should build slowly, cautiously and in silos. But there are even bigger changes afoot: the goal is no longer a perfectly preened product.
Take LEGO, for example. In 2009, the Danish toy company installed an AR display in one of its stores.58 Neither the technology nor the interface was especially sophisticated: shoppers could hold up a LEGO box to a camera, and a screen would show them a computer rendering of the pieces inside the box. Though temperamental at the time, it was nevertheless a stake in the ground that helped position LEGO, a brand that gives many budding engineers their first taste of invention, as a company for innovators and creators. The AR device showed its willingness to embrace the future.

Technology like Domino's pizza-ordering chatbot may be imperfect—but businesses must be willing to embrace imperfection to be at the forefront of innovation. Photo — thespoon.tech
Of course, this kind of thinking isn’t new to the tech world. Google kept some of its best-known products, like Gmail, in public beta for years while the public used them as fully-fledged utilities. The effect was to habituate people to brands testing their products in public. And as marketing messaging changes—with brands facing a rising tide of ‘authentic’ influencers and companies learning to own up to honest flaws—businesses are proving more willing to embrace imperfection. Indeed, much of the technology—whether chatbots to help order pizza (as used by Domino’s) or conversational AI to provide purchasing advice (which beauty brand HelloAva uses to discern customer skin type)—is in the early stages of development. Though often riddled with bugs that need fixing, the technology is nonetheless already being used with actual people.
But why would businesses take what not so long ago would have been considered a big risk? Essentially, because emerging tech has turned into an arms race and brands want to position themselves as early adopters. For one thing, being at the cutting edge of innovation improves their reputation: being tech-focused—even if many of the products aren’t finished—is often seen as a positive trait.
However, planting your flag early doesn’t merely bring a reputational benefit: it also protects you from being left behind. Brands that wait for the technology to mature may find themselves lagging behind rivals in terms of customer perception. They could also find it difficult to integrate new technologies: if a competitor establishes a platform that becomes the norm and sets best practices, moving people off a familiar rival platform is harder than teaching new audiences an emerging technology. Likewise, given that the development of technology is outpacing the speed of implementation, brands that adopt technology only once it has matured may discover their services undifferentiated or obsolete by the time they reach their target audiences.
In the innovation arms race, data is invaluable. But, as Tricia Wang explains, brands also need to consider the real people behind the numbers and statistics—or else risk missing the mark with their audience.
Interactive environments
Many of the developments discussed above concern technology that can be worn close to our bodies. In some cases, though, the technology isn’t worn at all—it’s in our surroundings. With vision and sensory technology allowing us to track and analyse real-world movements, and haptic technology providing physical force feedback to render virtual interactions more tactile, anything from our car to our home to a shop to an individual item of furniture is a potential interface.
The retail environment is well suited to this transformation. The emergence of the shop-as-interactive-environment is happening in China, where a store developed by Alibaba and Guess allows customers to shop seamlessly. And as we’ve touched upon, Amazon’s Go technology enables shops to become interfaces—spaces that customers can log into with their smartphones (and perhaps, one day, with their bodies), take what they need, and log out when they leave.

What happens when technology isn't even worn anymore, but is in our surroundings? The Amazon Go store serves as a blueprint.
Then there’s Farfetch’s Store of the Future, in London. Described as ‘an operating system for a shop’, it boasts features such as ‘a universal login that recognises a customer as she checks into the store; an RFID-enabled clothing rack that detects which products she is browsing and auto-populates her wishlist; [and] a digital mirror that allows her to view her wishlist and summon items in different sizes and colours’.59
The role of holographic technology is growing, too. It can already bring pop stars back to life to perform their greatest hits (or, like Japanese singing sensation Hatsune Miku, invent entirely new ones). Now the likes of Looking Glass Factory are trying to create interactive worlds using holograms and projection mapping. Imagine walking into a virtual store filled with holographic projections of clothing for sale. As you do so, built-in sensors track your hands as you run them along a shirt, with haptic technology providing the feel of the material against your skin.
Finally, there’s the potential impact on the future of the car. Just as retail brands are rethinking what a shop looks like and can do, so are automobile manufacturers likely to embrace the idea of the car as an interactive environment. As vehicles become ever more autonomous—a development explored in SPACE10’s 2018 Playful Research project and report ‘Spaces on Wheels’—the car-as-interface will almost certainly grow in popularity, with vehicles evolving to include interactive services to help fill the time of hands-free ‘drivers’ and passengers.

Once car manufacturers finally embrace the car as an interactive environment, what will it look like? We offer a suggestion in our Playful Research project, Spaces on Wheels. Rendering — foam studio
7. Where are we going?
The destination is a more layered, digitally literate world, in which human intelligence is augmented by machine logic and representation. Of course, some pursuits will lead to dead ends. Others will seem risky but could become the primary path to progress. In recent years, screens have shrunk and the barriers between humans and computers have dissolved, with interfaces becoming more intuitive and natural. If we continue in the same direction, witnessing the demise of the device and the rise of the accessory, we will discover a world in which screens and interfaces are both ever closer to our bodies (thanks to wearables) and all around us (thanks to increasingly immersive and augmented environments).

The destination is a more layered, digitally literate world—in which human intelligence is augmented by machine logic. Illustration — Marie Mohanna
For one thing, there’s a good chance that we’ll eventually stop carrying around our phones. Instead we will have screens embedded in our glasses or, even farther out, contact lenses—and one day we may even have brain interfaces.60 What we’ll see—whether wearing glasses or not—will be an environment that melds the digital and the physical in a way that scarcely seems possible now.
As the processing power of computers increases, objects will be more clearly defined, their locations more accurately mapped. Physical objects will increasingly be imbued with digital properties, blurring the relationship between form and function. The widespread adoption of 5G will help us stream data at such high rates that the need for local computational power drops sharply, with profound consequences for machine-learning services and the local hardware that runs them. The merging of the digital and the physical is also likely to have a huge influence on industrial and fashion designers eager to bring what once seemed like science fiction into reality.
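One way to picture that shift is the offloading pattern below: a thin client ships each camera frame to a remote model rather than running inference on-device. This is a minimal sketch under stated assumptions; the endpoint URL and response format are hypothetical, not any real service.

```python
# A minimal sketch of the offloading pattern that fast, low-latency links
# make practical: the device captures an image and sends it to a remote
# model instead of running inference locally. The endpoint is hypothetical.
import requests

INFERENCE_URL = "https://edge.example.com/v1/detect"  # assumed service

def detect_objects(jpeg_bytes: bytes) -> list[dict]:
    """Send a camera frame to a cloud/edge model; return detections.
    On a high-bandwidth, low-latency link the round trip can rival
    on-device inference, so the wearable needs only a camera and a radio."""
    resp = requests.post(
        INFERENCE_URL,
        files={"image": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        timeout=0.2,   # a latency budget that only fast links can meet
    )
    resp.raise_for_status()
    # Assumed response schema, e.g. [{"label": "chair", "box": [...]}]
    return resp.json()["objects"]

with open("frame.jpg", "rb") as f:
    print(detect_objects(f.read()))
```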
In a sense, of course, tomorrow’s world is emerging before our very eyes. We increasingly rely on artificial intelligence in everyday aspects of our lives, and AI-driven services will continue to impact society and the way we use computers. Just as we take for granted that we have the entirety of human knowledge at our fingertips, thanks to the world wide web, we will soon begin to evolve with the technologies we create—losing grasp of some seemingly obvious skills (like remembering phone numbers by heart) while gaining new ones that serve us in this digitally-tinted world.

As technology progresses, we'll stop carrying around our phones—and perhaps one day even have brain interfaces.
In short, we will be a society with augmented intelligence. Through these services we will navigate ever greater amounts of information as quickly as we once handled far less, acting faster and with less friction; technological speed will keep pace with the amount of information we call upon. We won’t even consider this remarkable. The term ‘AI-driven’ will disappear once everything relies on artificial intelligence.
But there are questions and concerns to be addressed. Augmented intelligence relies on artificial intelligence developed and trained by humans. It is, as Mo Gawdat explained, a technological toddler, ready to absorb whatever information we provide. As such, it reflects our biases and prejudices, good and bad.
Finally, we need to think seriously about how augmentation changes our behaviour, for better and for worse. At its core, wearing AR glasses, or being fed only the information that preferential intelligence systems deem relevant to us, distorts a crucial common reference point we have had until now: our collective reality. Once that happens, the political questions inevitably follow. We’ll need to consider who exactly will gain access to these new developments, and who will be left out of the social shift. Will intuitive AR be limited to the privileged few who can pay for it? Will the interweaving of AI and our lives be designed by Silicon Valley and other tech giants operating from a fundamentally Western perspective? And will augmenting the human body further widen the divides between the haves and have-nots, and between the global north and south?
As long as we start thinking about these sticky issues today, the future is full of promise. And as we reach the end of the smartphone era and anticipate what comes next, we can do so with hope, and with unlimited potential.
1. “Apple Reinvents the Phone with iPhone.” Apple Newsroom. January 9, 2007. https://apple.co/2BaVt8H.
2. “Smartphone penetration to reach 66% in 2018.” Zenith Media. October 16, 2017. https://bit.ly/2geb8aX.
3. “Most Important Inventions of the 19th Century: In Pictures.” The Telegraph. May 23, 2017. https://bit.ly/2VDZTuM.
4. Swaine, Michael R., and Paul A. Freiberger. “ENIAC.” Encyclopædia Britannica. October 26, 2018. https://bit.ly/2mjm4aH.
5. “Percentage of households with home computers in the United Kingdom (UK) from 1985 to 2017.” Statista. https://bit.ly/2RWBX5d.
6. “Percentage of households with a computer at home in the United States from 1984 to 2010.” Statista. https://bit.ly/2HDl8Ig.
7. “Share of households with a computer at home worldwide from 2005 to 2018.” Statista. https://bit.ly/2Js46QA.
8. “Number of smartphone users worldwide from 2014 to 2020 (in billions).” Statista. https://bit.ly/2dk8wHh.
9. Evans, Benedict. “Presentation: Ten Year Futures.” Benedict Evans. December 6, 2017. https://bit.ly/2j4kviB.
10. Schwieterman, Joseph, and Lauren Fischer. “The S-Curve of Technological Adoption: Mobile Communication Devices on Commuter Trains in the Chicago Region, 2010–2015.” Journal of Public Transportation 20, no. 2 (2017): 1–18. https://bit.ly/2HsUOSi.
11. Engelbart, D. C. “Augmenting Human Intellect: A Conceptual Framework.” Doug Engelbart Institute, 1962. http://dougengelbart.org/content/view/138.
12. “How the Computer Changed the Office Forever.” BBC News. August 1, 2013. https://bbc.in/2EmWCKG.
13. “The Personal Computer.” History Learning Site. March 17, 2015. https://bit.ly/30uR5ex.
14. Curtis, Sophie. “Rise of the Silver Surfers.” The Telegraph. March 8, 2014. https://bit.ly/2Jvdymj.
15. Weiser, Mark. “The Computer for the 21st Century.” Scientific American. September 1991. https://bit.ly/2Fm8WK8.
16. Communications Authority of Maldives. Accessed May 15, 2019. https://bit.ly/30BL67u.
17. “Gartner Says Worldwide Sales of Smartphones Recorded First Ever Decline During the Fourth Quarter of 2017.” Gartner press release. February 22, 2018. https://gtnr.it/2yyZQYn.
18. Helmore, Edward. “Apple reports new sales record for third quarter as it eases toward $1tn mark.” The Guardian. July 31, 2018. https://bit.ly/2w9DeMF.
19. “Wearable Device Sales Will Grow 26 Percent Worldwide in 2019, Says Research Company Gartner.” Wearable Technologies. December 6, 2018. https://bit.ly/2WVcqLN.
20. Burt, Chris. “‘Smile-to-Pay’ Facial Recognition System Now at 300 Locations in China.” Biometric Update. November 17, 2018. https://bit.ly/2OYNfVL.
21. Kaplan, David. “Global Smart Speaker Sales Hit 11.7 Million Earlier This Year — But What About Voice Activation Usage?” GeoMarketing.com. September 20, 2018. https://bit.ly/30yPWCH.
22. Kinsella, Bret. “China Is Driving Half of Global Smart Speaker Growth.” Voicebot. August 16, 2018. https://bit.ly/2HsBMLU.
23. Martin, Chuck. “Smart Speaker Ownership Hits 19% Globally, 35% In U.S.” MediaPost.com. June 26, 2018. https://bit.ly/2VAEEu1.
24. “Digital Voice Assistants in Use to Triple to 8 Billion by 2023, Driven by Smart Home Devices.” Juniper Research. February 12, 2018. https://bit.ly/2YHYJAf.
25. “2019 Trend Report for Journalism, Media & Technology.” Future Today Institute. https://bit.ly/2F6QlWm.
26. “CES Spotlights AR & Robotics.” Lowe’s Innovation Labs. January 6, 2017. https://bit.ly/2LVuyEd.
27. Marra, Greg. “LEGO Store Augmented Reality.” YouTube. March 18, 2010. https://bit.ly/1H5iGnd.
28. “Stressless Tech.” JWT Intelligence. March 29, 2019. https://bit.ly/2OwQrYo.
29. Lanier, Liz. “‘Pokemon Go’ Reaches 800 Million Downloads.” Variety. May 30, 2018. https://bit.ly/2Js59A0.
30. Associated Press. “Officials Warn of Real-Life Dangers of ‘Pokemon Go’.” NBC New York. July 13, 2016. https://bit.ly/2QgC2QJ.
31. Roettgers, Janko. “The Biggest New Thing Apple Is About to Release Isn’t Hardware at All.” Variety. September 12, 2017. https://bit.ly/2Huj1aO.
32. Leswing, Kif. “Apple CEO Tim Cook thinks this new tech will be as important as ‘eating three meals a day’.” Business Insider. October 3, 2016. https://bit.ly/2dE8Nbp.
33. “ARCore Overview.” Google Developers. February 28, 2019. https://bit.ly/2iSzJGI.
34. “Smart Glasses for Augmented Reality Technologies: Global Markets to 2022.” ReportLinker. February 2018. https://bit.ly/2VYGsCc.
35. Haselton, Todd. “After Almost a Decade and Billions in Outside Investment, Magic Leap’s First Product Is Finally on Sale for $2,295. Here’s What It’s Like.” CNBC. August 8, 2018. https://cnb.cx/2M1Uhuk.
36. Porter, Jon. “Report Claims That Apple Could Begin Production of iPhone-powered AR Glasses This Year.” The Verge. March 8, 2019. https://bit.ly/2TEqx9X.
37. Price, Rob. “Facebook Is Restructuring Its Augmented-reality Glasses Division as It Inches Closer to Launch.” Business Insider. January 21, 2019. https://bit.ly/2wa3gzd.
38. Feltham, Jamie. “5 Big Takeaways From Michael Abrash’s Oculus Connect 5 Keynote.” UploadVR. October 2, 2018. https://bit.ly/2HsZEPs.
39. Schuster, Dana. “The Revolt against Google ‘Glassholes’.” New York Post. October 20, 2014. https://nyp.st/2JSDUhv.
40. Rauschnabel, Philipp A., comp. Smart Glasses Survey 2016. Issue brief. University of Michigan–Dearborn. March 1, 2016. https://bit.ly/2YyUjMe.
41. Mehrabian, Albert. Silent Messages: Implicit Communication of Emotions and Attitudes. 1981.
42. “Gestures – Mixed Reality.” Microsoft Docs. February 24, 2019. https://bit.ly/2HI9ID5.
43. “Magic Leap One Hand Tracking System – How Advanced Is It?” AR Critic. August 17, 2018. https://bit.ly/2VH7Sau.
44. Cardinal, David. “How Apple’s iPhone X TrueDepth Camera Works.” ExtremeTech. September 14, 2017. https://bit.ly/2lOYh4T.
45. Matney, Lucas. “The Long List of New Alexa Devices Amazon Announced at Its Hardware Event.” TechCrunch. September 20, 2018. https://tcrn.ch/2xr9Nqm.
46. “iProspect investigates voice adoption and usage in ‘The Future is Voice Activated’ research piece.” Campaign Brief. August 21, 2018. https://bit.ly/2WgsIls.
47. “Video: We Tested One of China’s Unmanned Stores and This Is What We Found.” TechNode. July 24, 2018. https://bit.ly/2HvjCcp.
48. “Predictions 2018: The empowered machine.” Forrester. https://go.forrester.com/research/predictions/.
49. Matheson, Rob. “Personalized machine-learning models capture subtle variations in facial expressions to better gauge how we feel.” MIT News. July 26, 2018. https://bit.ly/2mLdqlj.
50. Thompson, Dennis. “Robots may soon join ranks of Alzheimer’s caregivers.” UPI. June 28, 2018. https://bit.ly/2w9KFUd.
51. “An AI Machine Directed Rag and Bone’s ‘A Last Supper’ Catwalk Show.” It’s Nice That. February 15, 2019. https://bit.ly/2HuravU.
52. Fisher, Lauren Alexis. “Louis Vuitton Launches A $3,000 Smartwatch.” Harper’s BAZAAR. November 5, 2018. https://bit.ly/2X5aIra.
53. “Big Tech’s handmade aesthetic.” J. Walter Thompson Intelligence, The Innovation Group report. July 30, 2018. https://bit.ly/2Eo3QOv.
54. Lowensohn, Josh. “Apple hires legendary designer Marc Newson to work under Jony Ive.” The Verge. September 5, 2014. https://bit.ly/2Whif9h.
55. Marchese, Kieron. “Givenchy VR goggles: Imagine fashion’s future foray into augmented reality.” Designboom. November 24, 2017. https://bit.ly/2AsrCse.
56. Lee, Elizabeth. “Technology is Reshaping Fashion Industry.” VOA News. April 29, 2018. https://bit.ly/2jCeZAl.
57. Williams, Robert. “Macy’s expands mobile checkout, AR/VR furniture departments.” Retail Dive. May 17, 2018. https://bit.ly/2rVLoWx.
58. “Augmented reality in Lego stores.” Retail Innovation. August 4, 2013. https://bit.ly/2QkcMJC.
59. Kansara, Vikram Alexei. “Inside Farfetch’s Store of the Future.” Business of Fashion. April 12, 2017. https://bit.ly/2otIpEc.
60. Tangermann, Victor. “Zuckerberg: Facebook Is Building a Machine to Read Your Thoughts.” Futurism. March 7, 2019. https://bit.ly/2TwGsYy.