Imagine a platform where you were paid to write good quality content, where your content was valued according to its quality rather than its virality.
That doesn’t exist right now on the web.
Factmata helps monitor internet content and analyse its risks and threats. Their technology can automatically extract relevant claims, arguments, and opinions, and identify threatening, growing narratives about any issue, brand, or product. Their tools save online media analysts time in finding new opportunities, risks, and threats.
It might be easy for someone like you or me to identify biased or offensive content when we see it, but how do we scale that up to the billions of tweets, articles, and posts made each day on the internet?
This is where Factmata’s products come in. Factmata uses Diffbot to extract clean, structured data from articles throughout the web, which is then funneled into their natural language algorithms for classification across 19 signals of possible harmfulness.
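To make the pipeline concrete, here is a minimal sketch of what classifying an article across several harmfulness signals and rolling them into one score might look like. The signal names, weights, and weighted-average method are purely illustrative assumptions; Factmata's actual classifiers are proprietary NLP models, not anything this simple.

```python
# Illustrative only: per-signal scores (each in [0, 1]) combined into a
# single harmfulness score via a weighted average. Signal names and
# weights are hypothetical, not Factmata's real ones.

SIGNAL_WEIGHTS = {
    "clickbait": 0.20,
    "hate_speech": 0.35,
    "hyperpartisanship": 0.25,
    "controversy": 0.20,
}

def combine_signals(signal_scores: dict[str, float]) -> float:
    """Weighted average of per-signal scores; missing signals count as 0."""
    total = sum(weight * signal_scores.get(name, 0.0)
                for name, weight in SIGNAL_WEIGHTS.items())
    return total / sum(SIGNAL_WEIGHTS.values())

# Example: an article that is very clickbaity but not hateful.
article_scores = {"clickbait": 0.9, "hate_speech": 0.1,
                  "hyperpartisanship": 0.6, "controversy": 0.4}
print(round(combine_signals(article_scores), 3))
```

In a real system each score would come from a trained classifier run over the extracted article text, and the weights would be tuned per use case (brand safety vs. misinformation monitoring, for example).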
As a marketer, I’m always looking for ways to improve ad spend and conversion. In this interview, Dhruv shares his thoughts on why content quality and fake news are important signals that advertisers might want to pay attention to. The full transcript of the conversation is below.
[00:00:00] Jerome Choo: So yeah, it’s the end of your day right now, I’m assuming. I appreciate you making the time.
I’ve been doing so much research into Factmata and into you, and I’m really excited for this.
I’m genuinely curious about something I’ve talked about with several people. My sort of philosophical question is: how do we move on from this engagement model, where Facebook and Twitter and the social media platforms of the world are trying to drive engagement and, in doing so, create these echo chambers of content that could be false? How do we change that model a little bit, so that social media companies want to prioritize finding and identifying false information?
[00:01:37] Dhruv Ghulati: So I actually personally think that Twitter and Facebook are doing quite a lot on fake news and misinformation right now. That definitely was not the case four, or even two, years ago. They’ve invested billions of dollars in this problem, they’ve hired probably thousands of people in these departments, and they’ve made a couple of acquisitions. I think the issue is not whether they’re taking a stand on these things. It’s more that we need an entirely new system, right?
We need a system where the platform is monetized differently from advertising, I think. And I think we also need a platform that considers completely new product features and new flows: where users are incentivized to create their own ranking systems, where users are encouraged to take content down or flag it, where there is inherently an incentive for people on the platform to create quality.
Right, that I think is a really important point. Imagine a platform where you were paid to write good quality content, where your content was valued according to its quality rather than its virality. That doesn’t exist right now on the web. In fact, if you write a piece of content, you’re rewarded by the number of claps or likes or shares that content gets; it’s not actually evaluated on the quality of the piece.
Because there’s no measurement system for quality, or fakeness, or how well balanced the piece is. So, going back to Factmata, that goes back to what we were trying to build. We started with this vision of a quality score for a piece of content.
[00:03:32] Jerome Choo: You know, I think you mentioned earlier that advertisers are not super interested in the quality of content, but they should be. Are they at least somewhat interested in where their ads are being placed now?
[00:03:44] Dhruv Ghulati: Yeah, that’s definitely starting now. At the beginning of the business, that wasn’t a key concern for advertisers.
We’re now four years into building Factmata, and we’re at the point where we’re about to start two really major contracts with a couple of ad platforms, and we’re getting a lot of inbound interest in our ad offering. Essentially, it’s about preventing an advertiser from placing an ad next to unsavory content.
There are some studies showing that users are, I think, 27% less likely to buy from brands whose ads are placed on fake news sites. But it’s taken some time to get that data and actually run some real tests and studies.
Advertisers used to focus more on not placing ads next to really hateful things. Now we’re moving into not placing ads next to, for example, COVID misinformation.
So misinformation and disinformation have been this really weird gray area.
People don’t know which bucket to put it in. Is it hate speech? Is it harmful content? Is it something else? Is it way too subjective and gray an area, so we don’t want to touch it? That’s why I think it has taken these companies’ roadmaps so long to actually build a consistent plan to deal with it: it’s hard to see where this fits into the overall priorities of the company.
Among the advertisers we’ve spoken to, there’s been a growing debate around the value of the open web. Is it valuable for a really reputable brand to advertise across all these random niche websites, going open on the programmatic web, rather than picking 50 sites they know are really good quality
and only advertising on those? It comes down to asking: do we want to take the risk of going for massive reach, versus just going small?
So there is that debate. And there have been studies on really good quality articles (with quality obviously hard-coded, so they have a list where, say, the New York Times equals a good quality article).
Those studies have shown that people actually stay on the page, they hover on the page, they actually read the article, I think by something like 1.3 seconds more. Which is great for advertisers to know, because it’s proof that they should advertise on a high quality page: readers stay on the page longer.
Right, they’re not just clicking off it. So that’s a really helpful study for advancing the cause, because it shows you can actually make money from advertising on quality content: your ads are more viewable. And then the second thing is this.
Some people were also saying, well, actually, if we advertise on high quality articles, we’re likely to reach a higher-income audience.
Right, the audience that simply earns more: people who read the New York Times, or the New Yorker, or something like that. And so for some brands it might actually be a good thing to make sure they’re only hitting those pages, because the buying power of the people clicking the ads is higher, so their likelihood of buying is higher.
Ultimately, I think about the entire problem around fake news as a business model problem, right? If we can start to build these kinds of insights, data, and proof points that show that fake news is really bad for business,
then I think we’ll be able to solve the problem.
[00:07:32] Jerome Choo: Right, right. Yeah. I think what’s really powerful about what Factmata has created is the fact that you can now measure this objectively, rather than having a human, quote-unquote, rate it, which could wax or wane depending on how they’re feeling that day, or on the group you’ve hired to look at the data.
[00:07:51] Dhruv Ghulati: So we actually have a customer that writes articles on the Facebook platform, and they basically use us to detect whether their headlines are clickbait or not. Every single article written on their platform goes through our checks. They found that if they were writing clickbait, they’d be banned from Facebook.
So they’re actually paying us to prevent that, which is great. I like that use case because it’s helping them win business. It’s good for them not to get banned, obviously, and they’re paying us for that benefit.
So it’s like a win-win for everyone.
I’m definitely very proud of where we are right now as a business. We now have 19 different signals on the quality of online content; we used to have about six. They’re 19 very high quality natural language processing classifiers, all of them state-of-the-art, each beating public research benchmarks, for detecting things like the controversy of a piece of content, its objectivity, hate speech, sexism, racism, and hyper-partisanship.
So we have about 19 of these different classifiers, and it’s taken us three or four years to perfect them. What we’re releasing this year, we’re not calling a quality score; we’re calling it a threat score. It will combine every single model we’ve built through our history into one single score.
[00:09:35] Jerome Choo: So we’ve got about two minutes left. What’s next for Factmata? Has your five or ten year vision for Factmata changed at all based on current events?
[00:09:47] Dhruv Ghulati: So the ten-year plan was always: step one, prove that there’s enterprise value. We’re starting to do that this year; we’re about to close some really big contracts and deals with enterprises in different markets. Early next year, we’ll be launching a product that will try to move us into much more of a mass-market system, where anyone can use the product.
And that was the goal of phase two: building a kind of social, viral manifestation of what we’ve built. It’s probably not going to be a social media platform, which is what we had in mind at the beginning, but it’s definitely going to be a way for anyone to access the actual engine we’ve built.
And then phase three is about distributing our engine into other platforms, like Facebook and Twitter and Diffbot and so on. So phase three is more about integrations. And obviously, it’s difficult to have a ten-year vision on the web.
But that’s the vision we want to stick to, because I knew it would take some time for big platforms to want to integrate, let’s say, a quality score, or a safety score, or a threat score. They want to build things themselves, or there isn’t a market for it, or it’s too much work to change their ranking system, or whatever it is. But I think in time we’re going to see much more interest in integrations with this sort of product. And that takes us back to the beginning of this meeting, right? What kind of model does it take for a business like a social media company to actually start prioritizing and optimizing for threat scores, or the reduction of threat scores, for quality of content, for more objective, factual content?
And that’s maybe why the ten-year plan was quite prescient. I thought it might actually take ten years for them to have to integrate a new ranking system. So it might be in five years’ time, when we’re in phase three of Factmata’s ten-year plan, that there’s regulation forcing the platforms to do it.
[00:11:59] Jerome Choo: Thanks so much for your time. I appreciate it.
[00:12:01] Dhruv Ghulati: Yeah, I appreciate it. Thanks for the time, and looking forward to seeing the video.
[00:12:08] Jerome Choo: All right. Take care. Have a good evening.