2.3 What are some examples of marketing AI in action?
2: How AI Works
All right, let's look at some examples of AI in action, especially machine learning examples, since we just talked about the types of machine learning.

Here's one: improved contact management. Contact management is something salespeople do when they want to know who in their list they should follow up with in order to keep those contacts warm. So what's the input? The contacts from the salesperson's network. And what are the outcomes? What kind of data could tell us it was a good idea to suggest a certain person to follow up with? One way to check is to look at which suggested contacts the salesperson actually followed through on, completing the request to reach out to that person. Another is whether the person actually responded, or even completed a sale, or completed a big sale. All of this is data we can use to decide how to optimize which people we tell the salesperson to reach out to.

What approach would you use for that? Supervised machine learning. Why? Because you already have all this data: based on people being asked to reach out to someone, you know which ones they actually did, which ones got responses, which ones turned into sales, and how big those sales were. With that outcome data, you can determine which features identify the right type of people to reach out to. You can use supervised machine learning because you have good outcome data.

What about the example we gave before of identifying topics on a website? You might want to know what subjects are being discussed on the website, so what's the input?
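To make the contact-management example concrete, here is a minimal supervised-learning sketch in Python. The features (days since last contact, number of past deals) and all the data are invented for illustration, and scikit-learn's LogisticRegression stands in for whatever model a real system would use:

```python
# Toy supervised-learning sketch: predict whether a suggested follow-up
# will get a response, using historical outcome data as labels.
# Feature names and values are hypothetical illustrations, not real data.
from sklearn.linear_model import LogisticRegression

# Each row: [days_since_last_contact, past_deals_with_contact]
X_train = [
    [5, 2], [10, 3], [90, 0], [120, 0],
    [7, 1], [200, 0], [14, 4], [150, 1],
]
# Label: 1 = the contact responded to the outreach, 0 = no response
y_train = [1, 1, 0, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)

# Score new contacts; suggest the ones most likely to respond.
candidates = [[8, 2], [180, 0]]
probs = model.predict_proba(candidates)[:, 1]
```

Historical outcomes become the labels, and the fitted model ranks new contacts by predicted response probability; a real system would use many more features and far more history, but the shape is the same.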
Well, the web pages from your website. And what would you do with this? You would use unsupervised machine learning. Why? Because you want to run a pattern analysis to understand what patterns exist that say this content seems to be similar to that other content. You can then move into semi-supervised or active learning by getting user input. A user could say: I know you said this word was an interesting one for this pattern, but it's not, so get rid of it. Or: there are some new words you missed. Or: this topic doesn't make sense, drop it entirely. Or: these three patterns you found should really all be one topic. So you can take the unsupervised machine learning and turn it into semi-supervised machine learning by adding user input.

What about social media sentiment? This is the example we showed in the previous lesson, where we looked at what subjects were being discussed. You can use the text from the social media conversations and try to understand the outcomes. If you're looking at sentiment, you ask: what was positive, what was negative, what was neutral? Then you can use semi-supervised machine learning, just like we walked through in the last example: these are the classifications the system is confident about, these are the ones it's not confident about, so let's use those to add more training data.

How about higher effectiveness for your web content? You might want to know what it is about your pages that actually works, so you can make more pages like those. The input is the web pages from your website, and the outcome data includes, for example, the bounce rate for pages.
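The topic-identification idea can be sketched the same way. The page snippets below are invented, and TF-IDF plus k-means stand in for whatever pattern analysis a production system actually uses; note that there are no labels anywhere:

```python
# Toy unsupervised sketch: cluster pages by word patterns to surface
# topics, with no labels at all. The page texts are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented page snippets: two about email marketing, two about SEO.
pages = [
    "email marketing campaign open rates",
    "campaign email subject line testing",
    "seo keyword ranking search engine",
    "search engine ranking backlinks seo",
]

vectors = TfidfVectorizer().fit_transform(pages)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
# Pages about the same subject should land in the same cluster,
# purely from word-pattern similarity.
```

Human feedback (merging clusters, dropping words, renaming topics) is what would then turn this into the semi-supervised setup described above.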
So how often did people come to a page as the first page in their visit and then just leave? Or how often did they come to the page and actually complete a conversion, where maybe they bought something or filled out a contact form? How often did they share the page on social media? How many inbound links did the page get from other websites? These are all outcome data that exist for all these pages. And if you know which pages are better, because they have better numbers on this outcome data, you can use supervised machine learning to run the pattern analysis and tell you what patterns the good pages have versus the bad pages.

How about another example: website content personalization. Think about the recommendations Amazon makes, where they say people who looked at this product also bought this other product. Those are content recommendations. I know those are products, but you can do the same thing for web pages. Suppose you use as input both your web pages and something you know about the person. For example, on a B2B site, you might know what industry they're from, so you might show them other pages about their industry. What kind of outcome data would you have? You'd know how often the recommendations are accepted, how often people actually click on them. You might also know how often they convert: how often someone signed up for a contact form or bought something after clicking the recommendation. Since you have that objective outcome data, you can again look at supervised machine learning.

So what other kinds of examples of AI can we look at? Well, Google is something you use every day, and Google, way back in 2012, started using AI in the search engine.
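The bounce-rate and conversion outcomes just described become supervised-learning labels very directly. A small sketch, with invented metrics and arbitrary cutoffs:

```python
# Turn page outcome metrics into training labels for supervised learning.
# Every metric value and threshold here is an invented illustration.
pages = {
    "/pricing":  {"bounce_rate": 0.25, "conversions": 40},
    "/blog/old": {"bounce_rate": 0.80, "conversions": 1},
    "/features": {"bounce_rate": 0.35, "conversions": 22},
}

def label(metrics, bounce_cutoff=0.5, conversion_cutoff=10):
    """1 = a 'good' page worth imitating, 0 = not, per simple cutoffs."""
    return int(metrics["bounce_rate"] < bounce_cutoff
               and metrics["conversions"] >= conversion_cutoff)

labels = {url: label(m) for url, m in pages.items()}
# labels == {'/pricing': 1, '/blog/old': 0, '/features': 1}
```

These labels, paired with page features, are exactly what a supervised pattern analysis trains on.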
It's possible they used it before that, but in 2012 they announced an update to their search algorithm called Panda, which was the first time everybody was really sure they were using AI. They keep growing how they use AI, and it's constantly changing, but let's talk about why they went to AI and what the AI actually did back in 2012.

There are folks called black hat SEO experts. What do black hat SEO experts do? The name comes from the old cowboy movies, where the white hats were the heroes and the black hats were the villains. It's kind of a strange way to describe people, but that's exactly what the parlance in the SEO industry turned into. They were trying to fool Google: trying to get Google to show pages for their clients even when that wasn't the right answer. So they were fooling Google and fooling searchers, and Google had a real problem, because if the search results can be spammed, people are less likely to use Google. They won't use it as much, and they won't trust it as much. That's really bad for Google's business, because Google depends on you searching constantly and clicking on those ads. So Google had to get rid of the spam these black hat SEO experts were creating, and they couldn't do it with rules, so they had to move to AI, to this kind of probabilistic model.

Panda was basically an improvement in the ranking algorithm. What did they do to improve it? They used human beings to label search results as good or bad, across a whole bunch of criteria: Is it a good design? Is it a speedy site with a really fast response? Is it quality content? Would you come back to this site for an answer to this question?
They had people label all of that, and you might say: wow, how could you possibly label every webpage out there for every search result? The answer is they didn't. That's what the machine learning is for. They labeled a subset of the search results: they picked out a fairly small subset that contained spam and relied on the human beings to identify what's spam versus what's not, what's a good search result versus a poor one. Then they used machine learning to run the pattern analysis, so that pages that look like the spam pages get knocked down in the rankings and the other pages move up. So the rankings in the search results are actually being affected by human judgment of which features humans liked versus which features they didn't. Machine learning was used to scale those human ratings, because even though Google has more money than God, they can't have humans rate every single page out there, so they use the machine learning to look for patterns. And if your site looks like a low-rated site, that's a problem for you, because your site gets ranked lower. You might say that's not fair: maybe you don't have a spam site. Well, maybe you don't, but if your page is a poor search result, that's still something Google wants to get rid of, even if it isn't technically spam. So this is how Google started using machine learning in the ranking algorithm: it used human beings to rate pages, and then it pushed down in the search rankings not just the pages humans explicitly rated lower, but any page that resembled a page rated lower. That's where the machine learning came in: figuring out which pages resembled those pages. So you might ask: what is it my page could have done that made it look low quality?
Well, what it's really doing is feature analysis. What do we mean by feature analysis? A feature can be almost any characteristic of the page. It could be the length of the title tag: maybe a really long title or a really short title was an indicator of low quality. Maybe the ratio of words to pictures: a site full of advertising might have a lot more pictures on it than another site, and maybe those pictures were small thumbnails rather than big hero images, so maybe that signaled a low-quality site. Or maybe you had a spam site that copied text from other sites, and that's how you were showing up in the search rankings. If your page had long runs of words in common with lots of other pages on the web, maybe that's how Google figured out you were a low-quality page: your content wasn't unique, it was just copied from lots of other places. There are hundreds or thousands of these features, each just a different characteristic of your webpage, that Google used to separate the low-quality pages from the high-quality ones.

So what was the practical effect of this AI algorithm called Panda that Google introduced back in 2012? Sites that ranked highly under the old algorithm might have been affected if they were actually low-quality sites. If your site was really good for search engines but not for actual people, you were in trouble: your pages started ranking lower. So what kinds of sites got hit by the Panda update? Sites called content farms. You don't hear about content farms much today, and that's because of Panda. Content farms churned out copies of sites over and over again, stuffed with lots of good keywords.
They had lots of keywords for SEO on them, but they weren't actually good sites: they were duplicate sites with lots of content and lots of ads. Older content didn't do well either. If someone asked a question that needed an up-to-date answer and got older pages, humans didn't like those older pages, so for some types of searches where you need up-to-date content, older content got ranked lower. Sites loaded with ads got hit, as we talked about. Vertical search sites got hit too. Suppose you searched for airline flights to Tahiti, and what you got was a page that, when you clicked on it, ran a search for airline flights to Tahiti in its own search engine. Humans didn't like that: they wanted to see the page that search engine would come up with directly, skipping that step. So those things got hit as well. And this wasn't just bad luck. If your search optimization was helping everyone in the equation, helping you the search marketer, helping Google, and helping the searchers, then it got rewarded. But if you were doing things that only helped you, trying to fool Google and fool the searchers, then eventually Google figured out how to make that stop working, and the Panda algorithm that used machine learning was your death knell, because you were trying to cheat the system. And this is really what we want you to think about: this was a way for Google to figure out which pages were high quality and which were low quality, something that previously only human beings could do. They used human beings to label the content so that machine learning could figure it out at scale, on the fly.
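The Panda-style workflow described above, where humans rate only a small subset and machine learning scales that judgment through page features, can be sketched as follows. The feature vectors (title length, ad ratio, share of duplicated text) and the labels are invented, and scikit-learn's DecisionTreeClassifier stands in for Google's actual, unpublished model:

```python
# Sketch of scaling a small set of human quality labels to many pages.
# Feature vectors are invented: [title_length, ad_ratio, duplicate_text_share].
from sklearn.tree import DecisionTreeClassifier

# Human raters judged only these pages (1 = low quality / spam).
rated_pages = [
    [120, 0.7, 0.9],   # long title, ad-heavy, mostly copied text
    [95,  0.6, 0.8],
    [45,  0.1, 0.0],   # normal title, few ads, original text
    [50,  0.2, 0.1],
]
human_labels = [1, 1, 0, 0]

clf = DecisionTreeClassifier(random_state=0).fit(rated_pages, human_labels)

# The model then judges pages no human ever rated: pages that
# resemble the low-rated subset get flagged.
unrated_pages = [[110, 0.65, 0.85], [40, 0.15, 0.05]]
predicted = clf.predict(unrated_pages)
```

The point is the scaling: a few thousand human judgments, expressed through page features, can be applied to billions of unrated pages.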
And that changed the whole search engine optimization industry, so that those black hat SEO experts really didn't have much success anymore. Now, Google's AI didn't stop in 2012, as you might expect; it keeps moving. They've moved on to use something called word embeddings, where they actually know which words are similar to other words. They know, for example, that royalty, king, and queen all mean roughly similar things. Word embeddings use machine learning to measure how far apart in meaning one word is from another, and Google uses that so that when you search for something, that kind of deep learning approach generalizes your search term. Just as we generalized the stop signs in the training data, if Google knows that royalty, king, and queen are similar to each other, that broadens the search term. It broadens the training data the same way we did with other deep learning techniques. More recently, Google introduced something called the Multitask Unified Model, or MUM, which understands 75 different languages. Instead of having to train a model on each language individually, it can take things trained in any language and apply them to any other language, which is a huge breakthrough. And MUM tries to anticipate the next searches you'll do: not just what's a good answer to the search you typed in, but what people usually search for next. In the example we have on the screen, if you type in travel to Beijing, it will not only show the top sites for Beijing, which was the old answer, but also travel advisories and budgeting information, because these are the things people tend to search for next after typing travel to Beijing.
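The word-embedding idea can be illustrated with hand-made vectors. Real embeddings are learned from text and have hundreds of dimensions; these 3-d vectors are invented purely to show how cosine similarity captures distance in meaning:

```python
# Toy word-embedding sketch: cosine similarity over hand-made 3-d vectors.
# Real embeddings are learned, not hand-typed.
import math

vectors = {
    "king":   [0.90, 0.80, 0.10],
    "queen":  [0.85, 0.82, 0.12],
    "banana": [0.05, 0.10, 0.95],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

sim_related = cosine(vectors["king"], vectors["queen"])      # close to 1
sim_unrelated = cosine(vectors["king"], vectors["banana"])   # much smaller
```

A search engine can use these distances to broaden a query: a search mentioning queen can also match pages that talk about king or royalty, because those words sit close together in the embedding space.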
So these are just some of many examples of AI in action, and I hope they helped bring AI to life for you.