What if companies could automate almost all of the hiring process AND make better hiring decisions? In this edition of the UpTech Report, we meet with Omer Molad, the CEO of Vervoe, to discuss automated recruitment, applicant tracking systems, and a brighter future for job seekers in the information economy.
Forget about all of those boring, conversation-based interviews of the past. Through a carefully constructed series of skill-based questions and online tasks, Vervoe is bringing AI and machine learning into the human resources department, radically transforming the way that businesses find new talent.
Host of UpTech Report, Alexander Ferguson, learns that this is not just a simple resume screening process. Instead of hiring solely based upon someone’s background and the number of years they’ve spent working, Vervoe allows companies to find the candidate that is actually best able to perform the specific tasks the job requires.
TRANSCRIPT
DISCLAIMER: Below is an AI generated transcript. There could be a few typos but it should be at least 90% accurate. Watch video or listen to the podcast for the full experience!
Omer Molad 00:00
As we sort of get more and more efficient with technology, we get better and better at slicing and dicing people’s backgrounds, but we’re really just doing the same things but on steroids.
Alexander Ferguson 00:17
Welcome to UpTech Report. This is our applied tech series. UpTech Report is sponsored by TeraLeap. Learn how to leverage the power of video at teraleap.io. Today, I’m excited to be joined by my guest, Omer Molad, who is based in Melbourne, Australia. He’s the co-founder and CEO at Vervoe. Welcome, Omer, good to have you on.
Omer Molad 00:36
Hey, Alexander. Thanks for having me. Good to be here.
Alexander Ferguson 00:39
Now, Vervoe is a skill testing platform. I’m excited to dig into that and unpack it. But I’m going to start with this: can you share what the problem is that you see in the marketplace, and that you’re set out to solve?
Omer Molad 01:30
The problem is that historically, most hiring decisions have been made based on the wrong signals, the wrong information, particularly people’s backgrounds. And in addition to being unfair, and excluding people who would otherwise be great at the job, it’s also unproductive and inefficient, because you’re not hiring based on the only thing that matters: someone’s ability to do the job. Instead, you’re focusing on how long they worked at a certain company, or where they went to school, or, maybe not deliberately, how familiar they seem to you and how comfortable you are with that type of person and their background, those sorts of things. And as we get more and more efficient with technology, we get better and better at slicing and dicing people’s backgrounds, but we’re really just doing the same things, on steroids. So what we’re about is breaking that down and getting back to the fundamentals: can someone do the job? It doesn’t matter if they’re 50 years old or 20 years old, it doesn’t matter what their gender is, it doesn’t matter if they went to Harvard or they’re self-taught. Can they actually do the job? Let’s see them do that.
Alexander Ferguson 02:13
According to your website, Vervoe’s AI automatically assesses, grades, and ranks job candidates based on how well they can actually do the job. And that’s what everyone loves to know: how well can you do this job? But how does that actually work? Can you explain?
Omer Molad 02:28
Think about if you were a restaurant owner and you wanted to hire someone to work in the kitchen, or as a waiter in that restaurant. The best way to do that is to actually have them come in and do a shift. Then you’ll know everything you need to know: how they interact with others, how they work with customers, how they make an omelet, whatever it is, and you get to see that in your environment. If it’s a Chick-fil-A or a Nando’s, that’s going to be very different from a five-star restaurant, so context matters. Now imagine you have a vacancy and you have 200 people applying for that vacancy. You can’t bring all 200 in to do a shift with you. So what you end up doing is eliminating 199 of them, or probably eliminating 195 and interviewing five, which is the traditional hiring process. What we do is let you bring that person in for a shift, but we do it virtually, the digital version of that. That’s the principle. We’re not trying to replicate interviewing, which is basically chat; we’re trying to replicate the concept of a job trial or an audition, and digitize it. And how we do that practically is we’ve created a way for candidates to engage in scenarios that are relevant to different jobs on the internet, and then we analyze how they respond, sort through that, and rank them.
Alexander Ferguson 04:15
Digital egg flipping! But I’m curious, what are the actual job-ready skills that are being tested for in this process?
Omer Molad 04:25
Yeah, so it can really be anything, but let’s take some examples. It can be a design role, where you actually design the logo, or design an onboarding flow, and then answer a bunch of questions around why you did it this way, and maybe a number of knowledge questions. In a sales role, it might be doing a presentation, or doing a mock cold call, or writing prospecting emails, and then maybe facing a challenging situation with a customer. In a customer service role it can be a range of customer situations. In a healthcare role it might be dealing with, you know, someone who can’t hear you, and you have to administer medication, so how do you overcome that? And it’s not the theory, it’s the practice: putting people in that scenario and then getting them to do it. In a finance job, it might be getting someone to do something in Excel, do a valuation, a discounted cash flow, or reconcile the ledger. So it’s getting as close as possible to actually performing tasks. Obviously there are some limitations, performing cardiothoracic surgery is a little bit more challenging on the internet, but for most jobs, particularly jobs that are performed in an office, or what we call knowledge worker type roles, it’s actually quite easy to test in a digital setting.
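To make the idea of a scenario-based assessment a bit more concrete, here is a minimal Python sketch of how such an assessment could be represented as data. The field names and example tasks are hypothetical illustrations, not Vervoe’s actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    prompt: str          # the scenario the candidate is placed in
    answer_format: str   # e.g. "text", "video", "spreadsheet", "file_upload"
    skill_group: str     # the skill this task is meant to surface

@dataclass
class Assessment:
    role: str
    tasks: List[Task] = field(default_factory=list)

# Example: a sales development rep assessment built from job-relevant scenarios.
sdr_assessment = Assessment(
    role="Sales Development Representative",
    tasks=[
        Task("Write a prospecting email to the attached lead profile.", "text", "prospecting"),
        Task("Record a two-minute mock cold call handling this pricing objection.", "video", "objection_handling"),
        Task("A customer is unhappy with delivery times. Draft your reply.", "text", "customer_handling"),
    ],
)

for task in sdr_assessment.tasks:
    print(f"[{task.skill_group}] ({task.answer_format}) {task.prompt}")
```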
Alexander Ferguson 05:56
If I look at the different levels, and I think we talked about this last time we chatted: unskilled labor, semi-skilled labor, skilled labor, and the astrophysicist and CEO level. You don’t play at all at those extremes; you really focus on the semi-skilled and skilled labor, is that correct?
Omer Molad 06:12
That’s right. And that’s about two thirds of the markets are semi skilled is roles that are typically the higher volume roles. So hospitality, high volume, call center, retail warehouse, those kind of roles were typically not always hourly workers can be hourly workers can be permanent, salaried workers, but typically, this is of higher volume, type roles, still a degree of skill that, that you want to test, typically, shorter tenure in an organization. And, and so with those type of roles, the primary problem that employers have is efficiency. So they’ve got a high volume of, of applicants to sort through, and they want to be able to do that quickly and efficiently. And typically, the method is that, you know, seven to 15 minutes, sort of shorter form assessment, or what we call the top of the hiring funnel, which means the point of application. And then in the skilled space is typically more highly skilled people, often known as like white collar, not not a term we love using, but knowledge workers. So software engineers, enterprise salespeople, these sort of people, but also healthcare, more technical roles, typically performed in an office today, more so at home remotely, but those kinds of roles, not a high volume of applicants. So you might be dealing with five to 30 people, but a more rigorous assessment, so testing, coding skills, behaviors, you know, on the job skills, but in context of the job, just like we discuss, you know, selling umbrellas is different to selling enterprise software. So, so there’s no such thing as sales, there’s sales in context, and prospecting is different from closing and, and being a Registered Nursing, you know, emergency and emergency department is different from a personal care in aged care, and so on. And so we put people in. So these are more rigorous, sort of advanced assessments for these kind of more highly skilled roles.
Alexander Ferguson 08:28
This is a unique point that I want to bring out again: you said you don’t want just any salesperson, you want the right salesperson for what you are trying to sell, or the right healthcare person for what you’re trying to do. And your product is a no-code solution, I mean, you don’t need a developer to go in and use it; people should be able to create their own workflows from your templates of skill-testing workflows.
Omer Molad 08:54
That’s right. So 90 plus percent of our clients start hiring same day. You know, as they’re on boarded, there’s some exceptions in the enterprise when they want a highly customized workflow or highly customized assessments. But But generally speaking, it’s it can be plug and play, if you want it to be and, and you typically either leveraging so an existing assessments or from our from our library, or or most commonly, you’re taking something existing and tweaking and customizing it to suit to suit your needs. And we’re also learning from your preferences. So we help you train our models, and this is where the context comes in. So if we had, you know, a graphic designer role and we had 100 people complete a design challenge for that role. A series A startup versus Deloitte would probably have different preferences for For about the kind of person that they want and how they want them to perform that role, even though it’s the same job title, right? So in a start up, it will be why haven’t you shipped this already. And at Deloitte, it would be let’s have a couple more meetings and discuss and present and have a committee and then probably not decide much and revise. And, you know, and so, I say that with affection, because I’ve worked in big enterprise for years. And so, you know, those companies will index for slightly different things, we would learn from their preferences, and we would adjust. So what that means is, it doesn’t mean that the worst candidate for the startup is going to be the best for Deloitte. But there’ll be ranked slightly differently. And that matters, context matters.
Alexander Ferguson 10:43
The concept of automating and streamlining finding the right candidate: how much is really automated? If they’re taking these skill tests, how far does your AI go in assessing and grading? Let’s take that design example: a design person does a test and they’ve designed something. Where does the handoff happen?
Omer Molad 11:04
Yeah, that’s a great question. It really depends on the role and also depends on how the company wants to do things. But so in some cases, it would be, we would probably knock out the all the top of funnel, so the resume and the phone screening and even a chunk of the interviewing. So typically, we would replace about two thirds of interviewing as well. So the interview ratio would interviewed a higher ratio would reduce by two thirds. So essentially, instead of interviewing, you know, 10, people, you’d introduce three, and then higher one in three. And, and you would then go from, so it says you have 100 applicants, you would do 10 interviews, now you’re going to have 100, you’ll interview three, and you’ll hire one, but also, not only interviewing fewer, the interviews are very different, you don’t have to verify that they can do the job during the interview, you’ve done that already. So in the interview can actually sell to the candidate, which is more important, particularly in today’s market. And and you can focus on, are we a good fit for each other? How do we set you up for success in the role, those kind of things. Now, some companies use us all the way to offer So particularly for those semi skilled higher volume roles. They automate absolutely everything. But as the role gets more skilled, there’s an inverse relationship between sort of skill and automation. So they’ll be, you know, the assessment will be a reference point. But but you’ll also want to spend more time so let’s say you’re hiring a VP of sales, you wouldn’t expect to automate the majority of that hiring process, nor would the candidate expect that, that the assessment might be a component of that. And then you probably want to have a series of discussions before you make a senior hire. The key point is that for the highly skilled roles, the problem we solve, we call it confidence, whereas for the semi skilled high volume, it’s more efficiency. So for high skilled roles is not an efficiency problem. Right time is not the issue your heart, you’re making a very senior hire sorry about saving time, it’s about having confidence to make the right choice. And we help you reduce inflammation of symmetry by giving you a really strong signal about functionally being able to do the job.
Alexander Ferguson 13:31
The information that it’s providing for highly skilled roles, I appreciate that point: you’re giving that confidence, versus more of an automation play for the higher volume. Is it giving a number? Is it simply saying, alright, this person is a 72% on their sales ability?
Omer Molad 13:50
Yes, but... so in each assessment we test a number of skill groups. They can be what we call job-specific skills, often known as hard skills, or they can be general work skills, skills applicable to many jobs, often known as soft skills or behaviors. And you can determine what you want to test; it can be assertiveness, and it can also be negotiation. So we’ll give a score for each skill group. But that score is not some global benchmark, because our assessments are not out of the box. Those scores are determined by their relevance to the main group in that cohort that you’re testing, right? That’s an important point. It’s not some universal 76; it’s not like doing your SATs, where you know what that score means. It’s relative to the other people doing that assessment, relative to how hard that assessment is, and relative to how you train the model. So, for example, let’s say we put 100 people through an assessment and the vast majority of them get over 80. That tells us the assessment is probably too easy for that role, and we’ll want to (a) change the assessment and (b) look at the way we’re grading people and training the models; there’s probably something to do there as well. Really, you typically want some sort of normal distribution. And then, when you’ve optimized the models, the scores can mean a great deal, because you’ve got a benchmark that you’re working to that makes sense for that assessment.
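Here is a small, hypothetical Python sketch of the two ideas in that answer: scoring candidates relative to their own cohort rather than against a global benchmark, and flagging an assessment as probably too easy when most of the cohort scores above 80. The z-score approach and the thresholds are illustrative assumptions, not Vervoe’s documented method.

```python
from statistics import mean, pstdev
from typing import Dict

def relative_scores(raw: Dict[str, float]) -> Dict[str, float]:
    """Convert raw scores into z-scores relative to this cohort only."""
    mu, sigma = mean(raw.values()), pstdev(raw.values())
    if sigma == 0:
        return {name: 0.0 for name in raw}
    return {name: round((score - mu) / sigma, 2) for name, score in raw.items()}

def assessment_too_easy(raw: Dict[str, float], threshold: float = 80, share: float = 0.8) -> bool:
    """Flag the assessment if most of the cohort scores above the threshold."""
    high = sum(1 for score in raw.values() if score > threshold)
    return high / len(raw) >= share

cohort = {"cand_1": 91, "cand_2": 85, "cand_3": 88, "cand_4": 62, "cand_5": 83}
print(relative_scores(cohort))
print("Consider revising the assessment or grading:", assessment_too_easy(cohort))
```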
Alexander Ferguson 15:40
You’re, there’s a lot to unpack, from what you just shared, but both from the the concept of what you’re assessing soft skills, hard skills, we’ll come back to that. But you also talked about this model, how it is varied based on this different pieces, and you use the word model, and I’m wondering if I might scare a few people of like, Wait, am I having to manage this model? I don’t know anything about data science and machine learning? And how does this work? How much does someone need to understand this? And how much does is it just just does it? What’s that, that bad?
Omer Molad 16:13
So the amount of understanding that you need to have as a user, in terms of data science, is zero. What you do need to understand really well, and what we expect you to understand, is what a good hire looks like for your company and for your role. So if you’re a hiring manager or a recruiter, we expect you to know that. We don’t expect that you’ll know how to test for it, and we don’t expect that you’ll know anything about AI and machine learning and all that. We expect that you can tell us what you care about in an enterprise seller, or in a designer, or in a cook, or in a delivery driver. You might not know how to describe it in the proper jargon or lingo, but you’ll know how to describe it in human terms. And if you can do that, we do the rest. The way we make it easy for you to tell us what you care about: we ask you to describe the skills for the role, that generates the assessment, you can then review it, you can swap questions in and out, then you approve it, then the candidates complete it. Then we automatically select a sample of responses that are anonymized, and we ask you to grade them. We say, hey, is this a good response, yes or no? And we learn what you care about. You won’t see machine learning or anything like that anywhere inside our product. We just say, hey, click here to optimize the grading, and we get you to look at, I don’t know, maybe 18 responses or something, it depends, and we learn from you: good answer, medium answer, bad answer. And then we go, now we know what you care about, and we can get closer and closer to how you would grade yourself without the technology. But we’ll do it for you while you sleep.
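A rough sketch of that human-in-the-loop step might look like the following: draw an anonymized sample of responses, let the hiring manager label them good, medium, or bad, and use the labels to nudge how the rest are auto-graded. Everything here, including the toy "preferred words" signal and the hard-coded labels, is invented purely for illustration.

```python
import random
from collections import Counter
from typing import Dict

# Candidate responses keyed by candidate id; the grader never sees these ids.
responses: Dict[str, str] = {
    "cand_17": "I would call the customer back within the hour and offer two options.",
    "cand_42": "Not my problem, I would escalate it immediately.",
    "cand_88": "First acknowledge the frustration, then check the order history and follow up.",
    "cand_03": "I would apologise, investigate, and confirm a resolution time with the customer.",
}

def sample_for_grading(all_responses: Dict[str, str], k: int) -> Dict[str, str]:
    """Return an anonymized sample for the hiring manager to grade."""
    chosen = random.sample(list(all_responses), k=min(k, len(all_responses)))
    return {f"response_{i}": all_responses[cid] for i, cid in enumerate(chosen, start=1)}

sample = sample_for_grading(responses, k=3)

# In the product this is where the hiring manager clicks good / medium / bad.
# The labels below are hard-coded only so the sketch runs end to end.
labels = {anon_id: ("good" if "customer" in text else "bad") for anon_id, text in sample.items()}

# A toy "preference signal": words that show up in responses graded as good,
# which could later nudge how the remaining responses are auto-graded.
preferred_words = Counter()
for anon_id, grade in labels.items():
    if grade == "good":
        preferred_words.update(sample[anon_id].lower().split())

print(labels)
print(preferred_words.most_common(5))
```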
Alexander Ferguson 18:08
What makes this interesting is that I made a hire earlier this year, and I actually handed it to someone: hey, can you go make this job post? And then they came back and there were over 100 applicants, and we were just trying to hire one person. And they’re like, I’m not even really sure who’s a good fit, because they were doing it for me. So they showed me a couple; I’m like, well, I like this, I don’t like that, I like this, and then they had to go through the rest and do it. They didn’t actually do a good job, and I ended up having to just go through them all myself. This idea makes it sound like you’re going through a similar process, you’re just training an AI: I like this, I don’t like this, I like this, and then it can accurately find out...
Omer Molad 18:45
Based on tasks, based on how people do tasks, and in an anonymized way. We don’t care what you look like, your name, your background, any of that. So if you think about what I described, how we’re optimizing these models, how we’re teaching the models what we care about, it’s purely based on how someone responded to a question or performed a task that’s relevant to the role, and you don’t even know who did it; you don’t even know the candidate’s name. So you’re training the model by saying, well, this person responded well to this scenario, this person didn’t. And that surfaces the people who are the best performers for the job, not the people you like. So when I say "like," it’s really the answers that you like, not the person that you like. That’s an important distinction.
Alexander Ferguson 19:35
That is very important, and I’m glad you raised it, because there’s a lot of concern and fear, I think, around bringing AI into the recruitment space, because of bias and missing things and so on. But it sounds like you’re trying to bypass the bias and get right to, ideally, the right candidates.
Omer Molad 19:57
it Yeah, and we’re not and we’re also not prescriptive. So we don’t say Hi, Bob, we say, Look, you know, there are two kinds of inputs. The first is the assessment itself. So which, which, as a user, you have you sign that off. So this is what we’re doing, what we’re testing how we’re testing. And then the second is the grading framework, which you as a user, optimize. And then what you get is a stack rank of candidates, and then you can decide what to do with that. So you can then decide anyone over 75, I’m progressing to the next stage, it could be on site interview, right? That’s up to you. You could also say, I’m going to talk to everyone until you get comfortable. We don’t say hire Bob, Bob is 82%. Or Bob suits a role in sales. What we say is based on the role that you’re filling, and the criteria, you set your competency framework, the skills you said are important. This is who performed well. Okay, based on the grading criteria, you told us matter for you. That’s that’s essentially what we’re doing. But based on performance, that’s what matters. So we don’t know. It’s it’s full equalisation that the Harvard graduate and the self taught person, they’re on a level footing, coming into the assessment. It’s just how you perform if you and I did the same assessment, the software doesn’t know the difference between us. It just it just knows how we actually engage with the tasks.
Alexander Ferguson 21:30
Coming back to hard skills and soft skills, we kind of talked about how someone could answer, but could you give me some examples of each, of how you guys have delivered that type of testing?
Omer Molad 21:46
Let’s talk about let’s talk about resilience. I think most people as a skill, some people think it’s a personality trait. But personality traits are really things that don’t change over the course of our lifetime. And skills are typically things that can be learned and can can be influenced in change. Now, if I said you resilience, or to most people, the word resilience, you probably got a sense of what that means. You might think I’m tough, I can bounce back from things. That’s the kind of connotation now. So how do we test for resilience? Now, let’s say that we’re talking about a sales development rep. So someone who’s doing cold calls, and they need to be resilient because they’re going to handle a lot. And so they need to kind of cut bounce back and keep so you can test for that in the context of that role. You can put people in these kinds of scenarios where, you know, they know a lot and see how they what do they do and how do they respond? Okay, now, let’s say that your 911, emergency operator, okay? And you’re constantly getting phone calls of people that have been stabbed or heart attack and that kind of stuff. You also need resilience. But it’s very different resilience to the sales person who’s doing prospecting, which is kind of a bit of a game. Right? Yeah. Okay. So if we’re going to now evaluate someone’s suitability to be an emergency response, you know, to work in an emergency call center environment, well, we’re probably going to test their resilience differently, we’re going to test it in that context, not the context of the cold calling. person, right. And so when we talk about soft skills, we always caution people like don’t generalize, don’t think about are they resilient, think about how they’re resilient in your context. I’ll give you another example attention to detail. I don’t consider myself having as having good attention to detail. But some people think I have really good attention to detail. You know why? Because I think for a CEO, I have Okay, attention to detail. For a librarian, I have terrible attention to detail. Right. And so, you know, the level of attention to detail you need in my job is different to a CFO and different to a librarian and different to a data entry person, completely different. So I come back to this point of context matters. So let’s not talk about attention to detail. Let’s talk about attention to detail in this job, which can be very different for each job.
Alexander Ferguson 24:22
When When, when you’re looking at the actual assessment to and I appreciate the distinction between in that job role or in this job and you’re looking at the actual assessment is it is simply like a multiple choice question of No, how would you how was that what does that look like?
Omer Molad 24:38
Yeah, it’s usually mostly open ended. I mean, it can be multiple choice, but we have quite a broad spectrum of formats for both the question and the answer. So the question could be a text question could also be a video where you ask the candidate to watch a video. The answer can be anything from written, recorded over vo audio, it can be documents, spreadsheets, it can be presentations. It can be images.
Alexander Ferguson 25:01
If someone’s writing up a a paragraph of a response or recording a video, you’re saying your AI can review that and get an assessment from it?
Omer Molad 25:11
That’s correct. That’s correct. So essentially, even and I’ll go even further. So analyzing written responses is easy. Now think about analyzing responses where there’s no right or wrong answer, like designing a logo. Right? What’s a good logo? Who knows. So essentially, we have three ways. And this is a, we could have a whole other session just on this, I’ll try and keep it as dumbed down as much as possible and keep it really simple. So essentially, the out machine learning models learning in three ways. The first is there’s a macro data set that’s constantly evolving. So we look at all responses to all from all candidates, all assessments, and we look at how they graded by all employers, and we detect patterns. And there are a lot of trends and patterns that are influenced by how people are responding, not so much what they’re responding with. So think about, if you run an assessment center, and you got a one sided mirror, and you’re observing people, and you’re not actually hearing exactly what they’re saying, you can learn a lot from that. So we do the digital version of that. The second is the substantive response itself. So we look at and we analyze transcripts of videos, we have like suggested answers. So it’s the actual correct versions of correct answers, which we can compare to. So now you’re getting much more granular. And then then the third, which I touched on earlier, is the preferences. So we then where each company, then teaches us what it cares about. And so off the bat, we get to about 80% accuracy. And what that means is within 20% of how you’d grade yourself without learning without knowing anything about your your preferences, so for any assessment, and then the rest of the 20% is based on your preferences where you teach us and and we optimize from 80 to theoretically 99 point, whatever, but it really depends at that point. And so we’re getting to the goal for us. Again, it’s not to say higher Bob, it’s to say, Hey, if you were to grade these 100 candidates manually, this is how you would do it, we got you’re really close to that. Now take the top chunk of people and do whatever you want with them, that that’s what we’re trying to do. And that’s where the automation comes in. We have an invented some grading matrix, we don’t know what’s good, you know what’s good.
Alexander Ferguson 27:44
All right, you’re really fascinated right at this point. And and I appreciate you keeping it brief, but I am fascinated whether you’re able to just look at complete text answers or even video responses and transcribe and then review it. And based off of these three levels, the macro, the specific and then fine tune the final 20% off of the individual users. I haven’t quite seen that anywhere else. And I’m fine,
Omer Molad 28:08
That’s because no one else does that.
Alexander Ferguson 28:13
And you say that with a smile, like, yeah, I know. If I can, for a moment: you mentioned Excel spreadsheets, doing something in Excel, or designing an image. Are you able to review those? How does that work?
Omer Molad 28:31
So it depends on the media format. There are certain things where we’ll be able to look at the substantive responses, and there are other things where we’ll look more at the behavioral signals around how you performed the task: how long did it take you, what order did you do it in, keystrokes, those kinds of things. It depends. In Excel, or actually Google Sheets, we’ve embedded the Google app suite, but let’s call it spreadsheets, we can look at how you’re engaging with the task, and we can also have correct answers. We can say, if you answered seven you’ll get this many points, and the same if you answered nine; you can set it up that way. So it depends how specific you want to get. Same with code challenges: you can have an actually correct output, so you can run the code and it either works or it doesn’t, it makes a triangle or it doesn’t. But you can also look at the quality of the code, the length, the errors, how people are spending their time; you can analyze the method in addition to the substantive answer. And then you learn, and over time you figure out what’s good, what’s less good, and what that company cares about.
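A minimal sketch of that kind of rule-based grading, assuming a hypothetical rules format, could look like this: award points when specific cells contain specific answers, and record behavioral signals such as time taken alongside the substantive result. The cell names, point values, and behaviour fields are illustrative only.

```python
from typing import Dict, List

def grade_numeric_cells(submitted: Dict[str, float], rules: List[dict]) -> int:
    """Award points per cell according to employer-defined rules,
    e.g. 'if cell B7 equals 7 give 2 points, if it equals 9 give 1 point'."""
    points = 0
    for rule in rules:
        value = submitted.get(rule["cell"])
        points += rule["points_by_answer"].get(value, 0)
    return points

# A hypothetical spreadsheet submission plus the 'how' signals captured alongside it.
submission = {"B7": 7.0, "C3": 1250.0}
behaviour = {"seconds_taken": 410, "edits_made": 23}

rules = [
    {"cell": "B7", "points_by_answer": {7.0: 2, 9.0: 1}},
    {"cell": "C3", "points_by_answer": {1250.0: 3}},
]

print("points:", grade_numeric_cells(submission, rules), "| behaviour:", behaviour)
```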
Alexander Ferguson 30:01
The description of the coding and the spreadsheets still makes sense. But did you say it’s able to take in the quality of the design as well? Like, how well you created something that looks nice? Does it go that far?
Omer Molad 30:13
No, design is a lot more challenging, and probably the most subjective. But we’ve had a lot of success with design roles. The way a design assessment would most commonly work is there will be one, max two, practical assignments, and then there’ll be a whole series of questions talking about the work or testing other things, like the candidate’s knowledge and preferences. So it might be, design this, and then record a video talking about it. And usually, cumulatively, all the insight we get from the entire assessment gets us pretty close; it’s pretty predictive of who’s going to be good. It’s very rare that someone scores very highly and then you look at their work and it’s terrible, or vice versa, because usually all the other insight we’re gathering from the candidate will be consistent with that. But design is an area where I would say there’s definitely an element of human verification when it comes to subjectivity, particularly around brand design. I think with more scientific design, like product design or user onboarding, it’s probably more mechanical, but if it’s something like brand and logo, you could have ten different opinions, right? So it depends.
Alexander Ferguson 31:50
Coming back to the soft skills, or resiliency, and the fascinating piece of someone writing an open-ended response or just recording a video and you being able to rate that: do you think people will start to figure it out? I’m trying to work out how best to describe this. Just as people are constantly gaming SEO now, trying to find the best way to rank, I kind of feel like this will be the future, more and more companies will go to these automated systems because it makes sense. And this is a two-part question: one, how does someone succeed in it? But also, how could it be gamed, and how are you preventing that?
Omer Molad 32:36
The best way to succeed is to actually be yourself. If you’re confident in your ability to do the job, then this is a great platform for you to showcase that and not be disqualified unfairly. And I think for questions that are more subjective, it’s impossible to game. What can be gamed is questions that have a right or wrong answer, and that’s why we don’t do anything out of the box. When you look at folks doing survey-style multiple choice, it’s only a matter of time before those questions appear on Google, right? Now, we’ve got a question bank feature, so we could have, for example, 50 questions for a given assessment, but it chooses randomly, it chooses 12 each time for each candidate, so they each get different questions. Even if someone posted the assessment on the internet, the next candidate will get different questions. But even that can only go so far. So at the end of the day, when you’re asking someone to do a real task, as opposed to just knowing an answer, how do you game that? How do you game that unless you’re committing fraud, like literally paying someone to do it instead of you? There are a ton of ways to prevent that, but that’s different, now you’re getting into proctoring and IP detection, and that’s actually fraud, okay, so just to be clear about that. Whereas if someone prepares themselves to the point where they actually can perform the job, that’s not gaming, they’ve upskilled themselves, good for them. If you look at, say, the case-study-style interviews at McKinsey and the management consulting firms, if someone practices a lot to the point where they do really well, that’s good. They didn’t game it, they performed, they prepared themselves well. Now, it’s highly unlikely that someone’s going to get to the point where they can fake being an ER nurse. That’s just not realistic; it’s going to take you years of study to get to that point.
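The question bank idea is straightforward to sketch: draw a different random subset of questions for each candidate, so a leaked copy of one candidate’s assessment is of limited use to the next. This is a generic illustration, not Vervoe’s implementation.

```python
import random
from typing import List, Optional

def draw_questions(question_bank: List[str], per_candidate: int = 12,
                   seed: Optional[int] = None) -> List[str]:
    """Each candidate gets a different random subset, so leaked questions lose value."""
    rng = random.Random(seed)
    return rng.sample(question_bank, k=min(per_candidate, len(question_bank)))

bank = [f"Question {i}" for i in range(1, 51)]   # a 50-question bank
print(draw_questions(bank))                      # 12 questions for this candidate
print(draw_questions(bank))                      # a different 12 for the next candidate
```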
Alexander Ferguson 34:44
but they’re not faking it anymore, because you studied it.
Omer Molad 34:47
Yeah. So the point is that by putting people in scenarios that are job-specific, and by making it open-ended and subjective rather than right or wrong, it’s very hard to game.
Alexander Ferguson 35:03
Because I can imagine a lot of folks who could be, I don’t know if concern is the right word, but they’re wondering, as an employee or a potential hire: everything’s getting automated, how do I know I can succeed in these new systems? Because I see both sides. I see the employer, or the team wanting to hire; this could be a great tool to automate and streamline. But from the employee side, it can be scary.
Omer Molad 35:29
I hear you. When eBay launched, I was worried about putting my credit card on the internet to buy something, as was everyone else. And now there are probably a thousand organizations that have my credit card, and you can tap your card without even entering a PIN, up to $100, we have PayPass, and you just tap it and it pays, and soon we’ll be able to blink and we’ll pay and end up buying things. So I think there’s a natural evolution to get to a point of maturity where some of these anxieties are alleviated. Until then, the onus is on us to communicate well, to demystify, and to help both companies and candidates understand this isn’t voodoo. Don’t get caught up in some article you read on Business Insider that AI is taking your job, or automating all of HR; that’s just nonsense, and it’s not helpful to generalize about these things. What we do is put you in the scenarios you’d face on the job, which, if you’re confident in your ability to do the job, you should be thrilled about, because we actually help you showcase your talent. 100% of candidates get an opportunity to showcase their talent without being unfairly screened out of the process. So as a candidate, I’d be jumping up and down with joy. We don’t eliminate anybody; we present candidates in a ranked order based on how well they perform those tasks, in accordance with the criteria that the company sets. We’re not hiding anything, there’s no black box. When our clients ask us how it all works, we sit down and explain it to them, and we can reconcile the grading for them. We don’t say it’s a secret; we show them how to train the models. So sure, I can understand that if you’ve never used technology like this before, and you’re doing the resume-and-interview game, then this is a big leap. But storing data in the cloud was scary once too, everyone wanted to have servers on site, and we overcame that; I think AWS’s revenue shows that. So I think there’ll be a natural evolution where people understand that these things are largely okay. And on top of that, I always find it fascinating when I read about, you know, there was this article about Amazon, that they had some AI recruiting tool they cancelled because, I think, it de-prioritized women. And of course everyone says, oh, that’s terrible. But humans have been doing that for years, humans have been de-prioritizing women for years, and nobody wrote that article, right? So I don’t know why people think that humans make better decisions than computers. That’s categorically incorrect, we’re terrible decision makers, right? And yet we get really upset when an algorithm makes a decision and it’s unfair. Well, we taught the algorithm, humans taught the algorithm, the algorithm didn’t grow on its own. It’s not Skynet, right? A Terminator reference for anyone wondering what I’m talking about. So, you know, I think...
Alexander Ferguson 39:03
what’s the solution? And like, how do we how do we make sure we’re progressing in the right direction? If you say we’re building these, how do we make sure we’re not deleted
Omer Molad 39:11
By us. We get better, we get better over time. That’s why everything we do is what’s called machine learning: it learns, we get better. It’s like everything with technology, it’s not perfect in the beginning. The first car we made was crap. Now we have better cars; now they’re basically almost silent, some of them don’t even use fuel, and soon they’ll drive themselves. Now, the first autonomous vehicle that runs over a pedestrian, oh my God, just wait for the headlines. But how many people run over pedestrians with a human-operated vehicle? And that’s fine, we’ve accepted that, it’s factored in. So we will get better over time, and there’ll be acceptance over time. It’s inevitable that computers are going to play a bigger and bigger role in hiring. Humans don’t have the time, and they don’t have the discretion, and that’s just a fact. People will get upset at me saying that, but there wouldn’t be so many solutions if humans were great at hiring. We suck at it. We suck. Turnover is ridiculous, the number of bad hires is insane, the cost of bad hires is in the trillions, right? And there’s a reason for that. Now, I’m not claiming that software is the answer to everything, and I’m not claiming that AI is the answer at all. But we will get better, we will continue to automate the things that humans aren’t good at, we will continue to use data to give people insight, and we will continue, yes, to make mistakes. People will create headlines from those mistakes, and we will overcome those mistakes and learn and get better. We get better.
Alexander Ferguson 40:59
I can tell your passion, Omer, when you get into this, and you fully believe that software should be able to help, that there is a solution, and that you shouldn’t be afraid of machine learning. I appreciate that view. The proof is always in the pudding: the more people try it, the more it learns, the more you can improve. If anything, I’m excited, but it’s the balance of the two. If you had to make a prediction, two or three years from now, what are we going to see? Is it going to be mass adoption, or is it going to take another ten years before more hiring is automated?
Omer Molad 41:39
So rather than trying to predict, what we did was try to measure, because we had a thesis that this method I’ve described to you, the one we use, of bringing someone in for a shift, the job trial, is better. But at the end of the day people will say, okay, but is it better? And there’s only so much marketing and spin that people have patience for. So we said, okay, we need to prove it. There’s a portion of the market that will adopt based on the idea, and the rest of the market, the early majority, late majority, and laggards, will only adopt when there’s proof. So we said, okay, let’s get the proof. What we do now is, when someone has made a hire on our platform, we ask who was hired and who the hiring manager was, and we ask for the name and email. Then, 120 days later, we allow for up to 30 days of transition and then 90 days on the job, that’s how we got to that number, we survey the hiring manager. We say, hey, hiring manager Sarah, you hired John a few months ago. Would you hire John again, knowing what you know now? Can you rate John out of ten? And is there any other feedback you want to give us? And we do that for everybody. That’s the only thing that matters: all the efficiency and saving time and confidence, that’s terrific, but at the end of the day, either John was a better employee than what you would otherwise have had, or he wasn’t. So we get that data, plus a lot of other data and a lot of other conversations, this is an automated survey, and we learn from our customers. And what we’ve discovered is that the people being hired through our platform are performing better than the people not hired through our platform, are staying longer, and are less likely to be bad hires. And so what we can then do is say, okay, now let’s look at the top performers. How did they score in the assessment? What skills were they good at? Let’s index up for that. So we know that in sales, people who are resilient do well, and all those kinds of things, and we can start building out a taxonomy. It’s important to mention that, because you’ve been asking how we know we’re not being hoodwinked by all this. Well, that’s how we know; we know that it works. Now, when you ask me for my prediction: I think if we can get people to the point where we give them confidence, where we move away from the noise and give them confidence that it’s not about AI or automation or any of this stuff, it’s about a method that works that gets you better employees in less time, it’s as simple as that, then commercial logic would suggest that, sure, there’ll be speed bumps and people who resist, but eventually it makes sense to go down this path. Now, what we saw in the pandemic was this huge shift towards online working and online hiring, not just Zoom, but hiring and interviewing remotely. People had no choice. So the vast majority of people that are hiring have taken one small step: they’re using the internet. They’re not necessarily doing it in a task-based way, like we would advocate, but they’ve taken one step on that journey.
So that helps us, and now we can help them take the rest of the steps and move away from resumes and unstructured interviewing and poor signals that are not predictive, towards skills-based hiring. I can’t tell you if it’s 12 months or 10 years. I can tell you that there’s rapid adoption happening already, and that the majority of the market is already on the internet in some shape or form, so that’s encouraging. And I can tell you that what we do works, and we have the proof. So when I put all that together, it makes for a very bullish outlook, and the rest is really up to us to sell the message and convince people that this is going to make their lives better than what they were doing before, make their job easier.
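As a simple sketch of the follow-up survey mechanics described above (30 days of transition plus 90 days on the job, then ask the hiring manager), the snippet below computes when to send the survey and what to ask. The field names and example data are hypothetical.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

TRANSITION_DAYS = 30   # allowance for notice period / start date
ON_THE_JOB_DAYS = 90   # time actually in the role before asking
FOLLOW_UP_DAYS = TRANSITION_DAYS + ON_THE_JOB_DAYS  # 120 days total

@dataclass
class Hire:
    candidate: str
    hiring_manager_email: str
    hired_on: date
    would_hire_again: Optional[bool] = None   # filled in from the survey response
    rating_out_of_10: Optional[int] = None

    @property
    def survey_due(self) -> date:
        return self.hired_on + timedelta(days=FOLLOW_UP_DAYS)

hire = Hire("John", "sarah@example.com", hired_on=date(2024, 1, 15))
print(f"Survey {hire.hiring_manager_email} on {hire.survey_due}: "
      f"Would you hire {hire.candidate} again? Rate them out of 10.")
```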
Alexander Ferguson 46:22
Being based in Melbourne, do you see any differences based on where people live, as far as who you hire and how you hire? Is there a difference in culture and process as you start to work with different areas?
Omer Molad 46:41
Some people like to think so; I would say not much. We’re in 70-plus countries, and a small part of our client base is here in Australia, but 70-plus percent is outside, and our biggest country is the US. So we work everywhere, and we see clients everywhere and candidates everywhere, in multiple languages. There are nuances, but at the end of the day I think the differences are small, particularly when you’re talking about English-speaking countries. There are local factors, like stimulus and lockdowns, and some governments behave differently, but the private market largely behaves in the same way. So we see the same challenges with companies and the same challenges with candidates. All the themes you’re hearing about in the US, around the great resignation and remote work, they’re everywhere. It’s the same, the world is flat. So we don’t see huge differences in how people are thinking about talent, or how individuals are thinking about their careers, across the world. The themes are common.
Alexander Ferguson 48:05
Do you see an increase, because of COVID and this change of mentality, in the idea that maybe I don’t need to hire people locally, or, as an employee, that I don’t have to work for someone local? Do you see a shift?
Omer Molad 48:19
think there’s there’s a greater recognition that you can think differently about where people can be based. So there are multiple levels, there’s kind of this remote this distributed, right? So there’s, some people say, Well, you all have to be in this location, but you don’t have to come into the office. And there’s some people saying, You can be anywhere you can be, we’re going to have different locations in the world, but we’re going to have officers in each of them. And then then there’s like, well, you can be anywhere in the world and work from your couch and that kind of thing. Now, it’s funny, we kind of find it all pretty fascinating, because we’ve been globally distributed from day one. And people said that, I mean, there were VCs that said, You’ll never succeed. You need everyone in an office in San Francisco. I’ve got an in writing. It’s pretty comical today to say that kind of stuff, but that we were literally told those kinds of things. You have to have an office with the sign out the front, and a meeting room. And in the dress, and like, you know, there are wounds that anymore. There were companies worth hundreds of billions today that have never people never met each other and never been in office. Right. So we know that that’s possible. That’s not to say there aren’t benefits to being together physically. And I think a lot of us are still figuring that out. But there’s that there’s definitely a shift 100% shift.
Alexander Ferguson 49:44
Are there any soft skills that you’re testing for that actually benefit a distributed team or remote team?
Omer Molad 49:53
Yeah. And you know, even with our own hiring we’ve come across some people who would otherwise be great, but aren’t really suited to autonomous working. I’ll give you an example: someone who needs a high degree of structure and a prescriptive management style, and who doesn’t like ambiguity or a bit of chaos, is going to struggle, they might struggle, in a remote working environment, because they need that structure. One of the challenges of working remotely is that you need to structure your own day and your own interactions. The in-office workday is highly structured, because you tend to come in when everyone else comes in, there are meetings, and then there are these fleeting interactions where someone swings past your desk. When you work remotely it’s the opposite: there’s no structure, and there are no fleeting interactions, you have to generate the interactions. You also have to make choices, email, Slack, phone, Zoom, what method do I choose? So people have to be quite deliberate about that style of working. For example, you’re not seen or observed, and so some people freak out that nobody knows if they’re doing anything, or, vice versa, are the people who report to me doing anything, or are they just sitting at home watching TV? That kind of stuff is problematic in remote working, which requires a high degree of trust, autonomy, and independence. So can you test for those things? Absolutely you can. For example, what kind of management style does someone respond well to? Highly prescriptive, or do they need a high degree of autonomy and freedom, do they thrive when they have freedom and just report back to you? Some people can’t do that, they can’t work autonomously and just ring the bell if there’s an issue; they need to be told what to do all the time, and that’s more difficult when you’re not sitting next to each other. So you can test for all those things, and it’s important when you’re hiring people remotely, or when they’re working remotely.
Alexander Ferguson 52:22
So definitely, for those that are listening: for your next hire, if you are distributed, you won’t necessarily find that on a resume, but it’s about finding a way to test for what you want.
Omer Molad 52:34
And what will happen is that the people who struggle in that environment will leave, because they’ll be lonely and they’ll feel a strong sense of isolation, and they won’t thrive and do their best work, and they won’t be good at communicating what they’re doing or their problems. Now, you can teach these things, but you can also test for these things. And it’s also about employee onboarding and setting expectations: here’s how we work in this company. When people have never worked remotely and they join a company that’s working remotely, you shouldn’t assume as a manager that they’re just going to know what to do. It’s completely different.
Alexander Ferguson 53:18
If anything, getting these results back from them taking these tests doesn’t mean you can’t hire them, but it gives you the confidence and awareness of what you might need to train them on.
Omer Molad 53:27
If you like them and everything else checks out, you just train them on this. That’s a great point. And that’s also why you want to come into the interview already knowing these things, so you can discuss them and say, hey, here are some things to think about when we work together, as opposed to asking resume questions during the interview to figure out, tell me about a time when you did this thing. You don’t need to do that anymore, you’ve already verified all their skills. Now you can meet them and say, okay, you’re wonderful at the job, you’ve never worked remotely, let’s talk about that. Let’s talk about how we can communicate with each other when we work together.
Alexander Ferguson 54:04
For those that want to learn more about Vervoe, you can head over to vervoe.com. That’s VERVOE.com. Thank you so much, it was great to have you on today.
Omer Molad 54:16
Thanks so much, Alex.
Alexander Ferguson 54:17
And we’ll see you all on the next episode of UpTech Report. Have you seen a company using AI machine learning or other technology to transform the way we live work and do business? Go to UpTech report.com and let us know