A New Paradigm in User Interface Emerges

The Desktop User Interface

Most of us know how to use a desktop computer – this interface has been battle-tested for roughly 45 years now. Few of us know, however, that the desktop interface was invented long before Microsoft and Apple made it commercially successful. Xerox PARC introduced the Alto in 1973, establishing the graphical user interface (GUI) that Apple and Microsoft would largely emulate and commercialize a decade later.

The Smartphone User Interface

The next major user interface shift – the one so many of us have become accustomed to – is the smartphone: a hand-held device with a small touchscreen and an internet connection. As with the Alto, however, the smartphone interface was introduced long before it became a commercial success: IBM demonstrated the Simon at COMDEX in 1992. Well ahead of its time, the Simon was the first internet-connected phone handset to include a touchscreen and the integration of e-mail, maps, stocks, and news. Apple and other smartphone makers eventually leveraged and widely commercialized this user interface some 15 years later.

The Smart Speaker User Interface

The third major paradigm shift in user interface is upon us, and it began with the coupling of a voice interface to an interactive digital assistant. Voice interfaces have been around for some time, but the combination of a voice-driven interface and a digital assistant was first introduced by Apple in 2011 with the iPhone 4S and its digital assistant, Siri. Unlike the first two inventions above, this voice interface was a commercial success – and the iPhone 4S was the last product Apple introduced during Steve Jobs's tenure. Also unlike the earlier two paradigms, which were widely emulated into commercial adoption, Amazon took the idea of a voice interface coupled with a digital assistant and made it hands-free. Amazon introduced the Echo in 2014, paving the way for the hands-free, voice-driven user interface. The Echo has been wildly successful, and we're just seeing the beginning of products that will leverage hands-free, far-field voice interfaces for a wide variety of applications.

Next Generation Voice User Interface

Each paradigm shift in computer user interface spawns a new era of digital products. The success of the Amazon Echo has given way to a land rush of smart speakers that pair hands-free voice with a digital assistant, most notably the Google Home and the Apple HomePod. While all of these continue to improve on the original Echo smart speaker design, I classify them as "Generation 1" voice interface products that are more opportunistic than useful: they leverage your voice data to improve their own technology and mine it to monetize with advertisers. There will be a wave of next-generation voice-driven products that will be exciting to see.

At Aaware, we're embracing the voice-driven user interface paradigm shift by making far-field voice interfaces work more robustly and with more acoustic intelligence. We have implemented our technology on the krtkl snickerdoodle boards, which offer incredible performance and flexibility at a very reasonable price. We leverage the Zynq programmable SoC that powers the snickerdoodle boards to accelerate the more computationally demanding portions of our algorithms, providing an unparalleled performance/price ratio.

While the new voice-driven interface is being embraced and will spawn a new era of digital products across many market segments, it also faces challenges to wider adoption, which I will cover in a follow-on blog post.

You can access the two snickerdoodle-based Aaware voice development platforms – AEV13SD10 and AEV13SD20 – and begin developing your voice-driven product here.

Published 30 June 2018
