RankBrain uses artificial intelligence to embed vast amounts of written language into mathematical entities — called vectors — that the computer can understand. If RankBrain sees a word or phrase it isn’t familiar with, the machine can make a guess as to what words or phrases might have a similar meaning and filter the result accordingly, making it more effective at handling never-before-seen search queries. […]
The system helps Mountain View, California-based Google deal with the 15 percent of queries a day it gets which its systems have never seen before, he said. For example, it’s adept at dealing with ambiguous queries, like, “What’s the title of the consumer at the highest level of a food chain?” And RankBrain’s usage of AI means it works differently than the other technologies in the search engine.
“The other signals, they’re all based on discoveries and insights that people in information retrieval have had, but there’s no learning,” Corrado said. […]
In the few months it has been deployed, RankBrain has become the third-most important signal contributing to the result of a search query, he said.
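The embedding idea behind RankBrain can be sketched in a few lines. This is a toy illustration only (the vectors, phrases, and 3-dimensional space are all hypothetical; Google's actual model is vastly larger): phrases become vectors, and an unseen query is matched to the known phrase whose vector points in the most similar direction.

```python
# Toy sketch of matching an unseen query via word vectors (illustrative
# only -- not Google's actual model). Phrases map to vectors; similarity
# is measured by the cosine of the angle between vectors.
import math

# Hypothetical 3-dimensional embeddings for a few known phrases.
embeddings = {
    "apex predator":     [0.9, 0.1, 0.2],
    "top of food chain": [0.8, 0.2, 0.3],
    "dishwasher repair": [0.1, 0.9, 0.7],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def closest_known(query_vec):
    # The known phrase most similar in meaning to the unseen query.
    return max(embeddings, key=lambda p: cosine(embeddings[p], query_vec))

# An unseen query whose (hypothetical) vector lands near "apex predator".
print(closest_known([0.9, 0.1, 0.15]))  # prints: apex predator
```

The point is that a query the system has never seen still lands *near* phrases it has seen, so results can be filtered by meaning rather than exact keywords.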
Rachel David, for The Guardian, asked two scientists and two artists for their views on robot creativity. My favorite is from Michael Osborne, associate professor in machine learning, University of Oxford:
Another problem is that it is difficult to automate the combination of ideas from many different sources that forms the source of much of human creativity: you might find inspiration from an interview with a neuroscientist in designing a new office layout. Putting some evidence to our thesis, we found, for both the UK and the US, that almost 90% of creative jobs are at low or no risk of automation.
“Your dishwasher is a robot,” Rubin said. “It used to be a chore you did in the sink. … There’s a lot of definitions [of artificial intelligence]. … The thing that’s going to be new is the part of the cloud that’s forming the intelligence from all of the information that’s coming back.” […]
[Rubin has started] Playground Global, a startup “incubator” that nurtures budding hardware companies. […]
Rubin, speaking about his departure from Google, said he questioned what he was going to do for the next 10 years of his life. “Am I going to fight for 1 or 2 percent market share [in mobile devices], or am I going to do 10 more Androids?” he said. Playground closed its fundraising efforts “yesterday literally,” Rubin said Wednesday, and will now have $300 million to invest in hardware companies.
Car dashboards seem drastically behind the times in terms of UI; they are unintuitive, cluttered with unnecessary information and, worst of all, distracting. […]
Somehow over the last 100 years we have accepted that complex dashboards — highlighting all the features and technologies in a car with individual controls — are essential.
In reality, we as drivers and car owners are living in a world of information overload. […] And yet, all this information is displayed persistently in my dashboard. […]
However, technology adoption in cars today is hitting an inflection point, and the UI model we have grown accustomed to cannot handle it. […]
We seem to be on the cusp of change, especially with Apple’s anticipated entry into the car market. The trend of direct control and complexity cannot continue; the industry needs a new vision for the dashboard. […]
Going forward, the industry should adopt a model of on-demand UI rather than direct control. In this model, information and controls would only be provided when needed.
He goes on to outline principles around concealing, anticipating, and personalizing the information shown. Good read.
The acquisition could help Apple’s efforts to bolster Siri, which is controlled by voice commands and sometimes struggles to understand users.
VocalIQ says on its website that its software helps computers speak more naturally by learning from each interaction with a human, employing an artificial-intelligence technique called deep learning. VocalIQ software also seeks to help computers better understand commands and their context.
Apple has ramped up its hiring of artificial intelligence experts, recruiting from PhD programs, posting dozens of job listings and greatly increasing the size of its AI staff, a review of hiring sites suggests and numerous sources confirm. […]
One former Apple employee in the area […] estimated the number of machine learning experts had tripled or quadrupled in the past few years. […]
Machine learning experts who want unfettered access to data tend to shy away from jobs at Apple, former employees say. […]
And some machine learning experts might be enticed by the challenge of matching Google’s smarts amid privacy constraints, suggested John Duchi, an assistant professor at Stanford University.
“New flavors of problems are exciting,” he said.
I’m hoping that there are more candidates “enticed by the challenge” than ones who “shy away”, but we’ll see. As I’ve said before, if there’s a list of Apple’s top-5 computing priorities for the next five years, I believe machine learning is on it.
1. Samsung Takes Smartwatch Fight to Apple
Considering that there's no release date, price, or launch market specified, I think this headline is twice the overstatement it would otherwise be. It runs Tizen, by the way. A variant will have a 3G modem. In terms of the modem and call functionality, I'm sincerely looking forward to seeing how that performs and what consumers think.
Instead of relying on a static list of threats to protect you, it’ll actually watch out for suspicious app behavior. […]
“Snapdragon Smart Protect is engineered to look at the actual behavior of device applications in real time and almost instantly detect and classify any application behavior that is considered suspicious or anomalous,” Qualcomm wrote in a blog post. “Suspicious applications are classified into severity levels of malware […].
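Behavior-based detection like this can be sketched simply. The following is a minimal illustration of the general technique, not Qualcomm's actual Smart Protect algorithm (the metric, thresholds, and severity labels are all hypothetical): flag an app whose latest behavior deviates sharply from its own historical baseline.

```python
# Minimal sketch of behavioral anomaly detection (illustrative only --
# not Qualcomm's algorithm): compare an app's latest behavior against
# its historical baseline and grade the deviation into severity levels.
import statistics

def classify(history, latest, threshold=3.0):
    """Grade the latest observation (e.g. SMS sends per hour) by how
    many standard deviations it sits from the app's historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1e-9  # guard constant histories
    z = abs(latest - mean) / stdev
    if z > 2 * threshold:
        return "malware-suspect"
    if z > threshold:
        return "suspicious"
    return "normal"

# An app that normally sends ~2 SMS per hour suddenly sends 50.
print(classify([1, 2, 3, 2, 2], 50))  # prints: malware-suspect
```

The appeal over a static threat list is exactly what the quote describes: nothing here needs a known malware signature, only a notion of what "normal" looks like for that app.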
4. Apple partners with Cisco to boost enterprise business
I get uneasy when I read things like this. There's no meaningful consumer problem to solve here. And for Apple, what's the worst downside to *not* doing this? To me, it means X fewer people working on new products or helping existing customers have a meaningfully better experience.
One big problem, Messrs. Cook and Chambers said, is ensuring employees get adequate networking performance in the workplace. Apple and Cisco said they aim to establish a “fast lane” for iOS devices in the corporate world, prioritizing wireless and Web connections so critical business applications aren’t compromised by streaming cat videos and other nonbusiness fare.
5. Xiaomi said to release notebook in 2016 with help from Inventec and Foxconn
Will it use Windows? The first of more Windows devices to come?
- Unfortunately, Google is struggling with a number of issues that will limit its ability to keep Google Now far ahead of its competitors unless it moves fast.
- First. Its latest innovation, Now on Tap (see here), which has the potential to meaningfully improve Google’s data collection, requires Android M to work.
- Google’s inability to update the software on its devices means that it could be 2017 or 2018 before Android M will be mainstream (see here).
- Second. Many of the core team who developed Google Now have left the company after their creation was folded into the core search business against their wishes.
- Cortana on Android is another move by Microsoft to make its ecosystem operating-system agnostic, aiming instead to encourage users to like and spend time with its services.
- This is exactly the right strategy for Microsoft to become an ecosystem company but […] there is still an awful lot of work ahead.
“On a phone, the biggest intellectual difference is you don’t go to your search box as your first resort,” said Keith Rabois, a partner at the venture capital firm Khosla Ventures, who has invested in a search start-up called Relcy. “On a watch, it’s inconceivable that you would go to a search box perhaps at all.”
This is why Google and Apple are investing so intensely in advancing Google Now and Siri.
Machine learning is to 21st century devices as the graphical user interface was to 20th century computers [in terms of how] critical it will be to a high-performance product.
Kevin Fitchard, for GigaOm (when it was still operating) wrote an interesting piece on Qualcomm’s “Zeroth” technology, expected soon. It’s from March, but still very relevant.
New cognitive computing technology Zeroth […] aims to bring artificial intelligence out of the cloud and move it – or at least a limited version of it – into your phone. […] I sat down with Qualcomm SVP of product management Raj Talluri, who explained what Zeroth was all about. […]
Zeroth […] will perform basic intuitive tasks and anticipate your actions, thus eliminating many of the rudimentary steps required to operate the increasingly complex smartphone, Talluri explained. […]
The most basic use case would be taking better photos, as it can optimize each shot for the types of objects in it. It could also populate photos with tons of useful metadata. Then you could build on that foundation with other applications. Your smartphone might recognize, for instance, that you’re taking a bunch of landscape and architecture shots in a foreign locale and automatically upload them to a vacation album on Flickr. A selfie might automatically produce a Facebook post prompt. […]
Other examples of Zeroth applications include devices that could automatically adjust their power performance to the habits of their owners, or scan their surrounding sensors to determine what a user’s most likely next smartphone action might be. […]
Zeroth itself isn’t a separate chip or component. It’s a software architecture designed to run across the different elements of Qualcomm’s Snapdragon processors […].
Exciting. It wouldn’t surprise me if Apple’s enhancements to Siri use similar technology. I’m not sure of the approach that Android, Google Play Services, or Android vendors will take. Perhaps, for instance, Google Now will simply take advantage of Zeroth capabilities for Google-Now-Relevant functionality, while the OEMs apply Zeroth to other consumer problems.
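The photo use case from the Zeroth piece decomposes into two steps: on-device classification tags each shot, then simple rules route it. Here is a toy sketch of the routing half (the labels, destinations, and function are all hypothetical, not Qualcomm's API):

```python
# Toy sketch of routing classified photos, per the Zeroth use case above
# (hypothetical labels and destinations -- not Qualcomm's actual API).
def route_photo(labels, away_from_home):
    """Map on-device classifier labels for one photo to a suggested action."""
    if "selfie" in labels:
        return "prompt: post to Facebook?"
    if away_from_home and {"landscape", "architecture"} & set(labels):
        return "upload to Flickr vacation album"
    return "save to camera roll"

# Landscape shots taken away from home go to the vacation album...
print(route_photo(["landscape", "sky"], away_from_home=True))
# ...while a selfie triggers a social post prompt regardless of location.
print(route_photo(["selfie", "face"], away_from_home=False))
```

The hard part, of course, is the classification itself running efficiently on the phone; the routing logic on top can stay this simple.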
Chennapragada spelled out the three-pronged direction [for Google Now on Tap] — what she called the “bets” her team is taking. The first bet was embedding Now with Google’s full “Knowledge Graph” — the billions-thick Web of people, places and things and their many interconnections.
The second is context. Now groks both the user’s location and the myriad of signals from others in the same spot. If you enter a mall, Now will tailor cards to what people in that mall typically ask for. “Both your feet are at the mall. You shouldn’t have to spell it out,” Chennapragada said. “Why should I futz with the phone and wade through 15 screens?”
And this is where the third benchmark for Now comes in: Tying that context to the apps on your phone, or ones you have yet to download. In two years, Google has indexed some 50 billion links within apps. In April, it began listing install links to apps deemed relevant in search. Indexed apps will be included in Now on Tap when it arrives in the latest Android version this fall.
I’m looking forward to trying out Google Now on Tap.
My impressions from watching Apple’s developer conference keynote.
Siri and Machine Learning
- To date, Siri improvements have been meaningful, but modest, especially if you recall that Siri debuted 3.5 years ago.
- This is the first keynote where Apple used the terms “machine learning” and “deep linking”.
- Between Siri intelligence (project Proactive), the News app, Apple’s data center build out, and competitive pressure from Google, my hypothesis is that Apple has put its foot down on machine learning and intelligence. And that doesn’t even take into account the machine learning Apple will need if it pursues a car.
If there’s a list of Apple’s top-5 computing priorities for the next five years, I believe machine learning is on it
- If there’s a list of Apple’s top-5 computing priorities for the next five years, I believe machine learning is on it. Mark Gurman, who writes for 9to5Mac, mentioned this on his appearance on The Talk Show:
A lot of this is really to tackle Google […] It’s very hard to just […] one day […] decide to drop Google search from your platform. [But] year over year, Apple is adding features that […] reduce the reliance on Google […] teaching the consumer that Google is not necessary.
- Still not convinced? How about this, from Apple’s iOS 9 preview page (bold emphasis is mine):
“Siri powers a more intelligent Search. […] [it’s] the technology that powers Search on your iPhone and iPad. And now you can get even more answers when you type in the Search field. […] A head start on every search. […] Your search screen is prepopulated […].”
Machine learning is to 21st century devices as the graphical user interface was to 20th century computers [in terms of how] critical it will be to a high-performance product
- I’m changing my mind about Google’s data-volume-based advantage. I believe Apple sees a volume of (anonymized) user data that’s on the same order of magnitude as Google (on mobile). Google Now may provide Google with more question / intent data, but Apple sees the bigger picture of what consumers (in aggregate) do / need throughout the day. I base my belief on iOS’ huge installed base, high app downloads and usage, and Apple’s full-stack access to iOS devices.
- With so many dots to connect: Mac, iPhone, iPad, Apple Watch, Apple TV, Siri, Maps, News, HealthKit, HomeKit, and CarPlay, Apple will have great opportunities to add value to consumers’ daily lives.
- Machine learning is to 21st century devices as the graphical user interface was to 20th century computers. I don’t mean that as a user interface metaphor, but as a way to express how critical it will be to a high-performance product.
- Apple is poised to deliver Siri’s proactive features to a broad user base very quickly:
- Math: iOS has a larger unified installed base X faster adoption of iOS releases X support for more legacy devices.
- Select unknowns:
- Impact of 3rd-party support (for Google Now on Tap or Siri’s proactive features) on growth.
- Impact of Apple’s self-imposed privacy guidelines on the feature set / consumer uptake.
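The "math" bullet above can be made concrete with a back-of-the-envelope calculation. All the numbers below are hypothetical, chosen only to illustrate why installed base × release adoption × legacy support compounds in Apple's favor:

```python
# Back-of-the-envelope sketch of the "math" bullet (all figures are
# hypothetical, for illustration only). A proactive feature's first-year
# reach is roughly: installed base x OS-release adoption x share of
# devices recent enough to be supported.
def first_year_reach(installed_base, adoption_rate, legacy_support_share):
    # Only devices that adopt the new release AND are still supported
    # can run the new feature.
    return installed_base * adoption_rate * legacy_support_share

ios_reach = first_year_reach(500e6, 0.80, 0.90)        # hypothetical iOS figures
android_reach = first_year_reach(1_000e6, 0.20, 0.50)  # hypothetical Android figures
print(f"iOS: {ios_reach:.0f}, Android: {android_reach:.0f}")
```

Even granting Android a larger absolute installed base, the two multiplicative discounts (slow release adoption, fragmented legacy support) can leave iOS with the larger reachable audience for a new OS-level feature.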
[Apple’s] odds of “proficiency” are high. But the odds of being better than Google are not great.
- So, can Apple develop world-class machine learning capability?
- At a super high level, let’s consider the large scale data products that Apple has developed in the past few years: Messages, Siri, Maps, iCloud, Apple Pay. Without getting into details here, the track record of these services is mixed, to say the least. So, odds of success with machine learning? Well, the odds of “proficiency” are high. But the odds of being better than Google are not great. Possible, but not great.
- Note: no one has achieved true “proactive” assistance yet. Here’s my very rough scale:
If you’re Samsung, Lenovo, or Xiaomi what do you do to differentiate from Apple or other Android OEMs?
- Which brings up another point: how soon until “Hey, Siri” works a) un-tethered and b) with the proactive features?
- For a comparison of Siri’s intelligence features vs. Google Now and Cortana, see this.
- Finally, all this is yet another reason why Android and Windows OEMs will perennially struggle to do more than stay on the treadmill. If you’re Samsung, Lenovo, or Xiaomi, what do you do to differentiate from Apple or other Android OEMs?
After Apple’s “Proactive” initiative leaked this week, these words from Google’s I/O keynote — during the reveal of “Google Now on Tap” — caught my attention:
Selective Hearing & Amplification from Google I/O
- Answers […] proactively
- Natural language understanding
- Things (as in, things recognized)
- Places (Google can recognize 100M places)
- Knowledge graph (Google has 1B entities)
- Neural nets (Google’s is 30 layers deep)
- Machine learning
Machine learning […] is going to be a critical [capability] for Apple
First, these are all related to, or enabled by, the bottom term: Machine Learning. It’s the ability for a computer to learn new things: shapes, patterns of behavior, relationships, and more. This is already a very important capability for Google, and is going to be a critical one for Apple, too. Why? Well, briefly, to enable Apple devices to make sense of the user’s context (location, activity, history, messages, related information, intent, etc.) and, in turn, to help the user achieve her objective, stated or implied. Things like catching a plane, buying a present, or meeting a friend. Or adjusting exercise frequency, sleep, or diet. The possibilities are many.
The figures [Google showed] speak to the […] massive, massive level of investment Google has made
Second, the figures Google mentioned — 30-layer-deep neural net, 100M places cataloged, 1B entities recognized — these are figures that not only speak to the utility that Google Now on Tap will have, they also imply the massive, massive level of investment Google has made. Investment in computing hardware (a good deal of it custom) and software (neural nets, understanding natural language, learning, user interface, etc.).
Finally, this is what Apple’s project Proactive — or anyone’s machine learning ambition — is up against. The question, for Apple is, does it compete head-to-head (symmetrically) or in a focused way (asymmetrically)? Probably the latter. Either way, I can’t wait to see.
Does Apple compete head-to-head […] or in a focused way?
- Voice recognition
- Image recognition
- Machine learning
2. Justine Musk, Elon Musk’s ex-wife, writes good words about how to become an ‘extreme success’. I highly recommend this; a must-read.
3. Yole on Image Sensor Future. Especially relevant to machine sensing and learning, one of Google’s ‘three most important things’.