Friday, July 29, 2016

Lying in the hiring process: What human resources needs to know

 People lie all the time during the hiring process. It’s up to Human Resources and hiring managers to catch those liars. Where are those fibs being told — and how can you prevent them?

Resume lies


In this intense job market, it’s no surprise that many applicants exaggerate parts of their resumes to look more enticing to potential employers.
The practice is so widespread, in fact, that nearly half of all applicants lie on their resumes.
That’s according to a 2009 study from ADP, which found that 46% of all applicants commit some form of resume fraud.
Where are those lies being concentrated? Here are the 10 most common lies on resumes, courtesy of Marquet International:
  1. Stretching work dates
  2. Inflating past accomplishments and skills
  3. Enhancing job titles and responsibilities
  4. Exaggerating educational background
  5. Inventing periods of “self-employment” to cover up unemployment
  6. Omitting past employment
  7. Faking credentials
  8. Falsifying reasons for leaving prior employment
  9. Providing false references, and
  10. Misrepresenting a military record.

Interviewing lies


Your job would be a lot easier if you could spot those resume lies up front and nix those candidates from consideration.

But no matter how clued in you are to what applicants fib about, you’ll still inadvertently bring many of them in for interviews.

That’s when your skills at judging character come in. So who’s the best at screening potential talent? Is it someone who’s skeptical and suspicious about most applicants, or a person who’s trusting?

If you guessed that skeptical managers would do a better job, you’re not alone. You’re also wrong.

That’s according to a study from psychologists Nancy Carter and Mark Weber, recently highlighted in The Washington Post.

A large majority (85%) of participants said a skeptical interviewer would do a better job spotting dishonesty in job interviews.

But a subsequent study found that people who trust others — or who assume the best in other people — are the best at identifying liars.

How is this so? One human resources expert explains:

… Lie-detection skills cause people to become more trusting. If you’re good at spotting lies, you need to worry less about being deceived by others, because you can often catch them in the act.

Another possibility: People who trust others become better at reading other people because they get to see a range of emotions during their interactions. That gives them more experiences to draw from to tell when someone is lying and when someone is telling the truth.

The expert leaves employers with some advice on who should sit in the interviewer role to keep applicants from duping them into a bad hire:

In the expert’s words: We need leaders who demonstrate skill in recognizing dishonesty. Instead of delegating these judgments to skeptics, it could be wiser to hand over the hiring interviews to those in your organization who tend to see the best in others. It’s the Samaritans who can smoke out the charlatans.
Of course, faith in others can go too far. It’s important to sprinkle a few ounces of skepticism into each pound of trust. Ultimately, while the best leaders don’t trust all of the people all of the time, the keenest judges of character may be the leaders who trust most of the people most of the time.
Source: http://www.hrmorning.com/

Teaching Machines to Think About HR

For all its promise for HR, big data and its “machine-learning” component still only give us facts about, and factual relationships within, our workforces; not conclusions based on the statistical analyses HR has always needed—and always will—to make meaningful predictions.

By Peter Cappelli

Workforce analytics. Big data. Machine learning.

The above terms—or “buzzwords” if you don’t like them—are popping up in many discussions of human resources, mainly involving vendors with solutions that make use of new data aimed at answering traditional workforce questions. Is there anything really new in these approaches, and, if there is, should we be paying attention to it?

The answers are yes and yes.

Let’s start with the term workforce analytics. In some ways, this term is to traditional measures of outcomes as human resources was to personnel. It is about addressing traditional, evergreen questions in different, more sophisticated ways.

Workforce analytics describes an effort to use data and sophisticated analyses to address HR problems. The most topical ones at the moment are: “Which candidates will make the best hires?” and “Which employees are most likely to leave?”

There is nothing new about those questions, and there isn’t much new about how they are being approached. The novelty comes, in part, from the fact that, after the early 1980s, big corporations gave up trying to address these questions in a sophisticated manner, so most people in business aren’t aware that similar approaches were tried a generation or so ago.

But there are some differences. One is a greater interest in analyses pertaining to financial outcomes: e.g., it will save us $5,000 per employee by reducing turnover.

The second difference involves the type of data available. The “Manplan” program in the 1960s required HR staff to read information about an employee from one file, mechanically punch it onto a card, then get different information for that same employee from another file, punch it onto a different card, then do that for every employee they wanted to study. Only when those steps were completed could they start looking to see what factors predicted turnover. It cost a fortune to look at even a small set of employees.

Virtually all HR data now is kept electronically, and, in most companies, the information on every applicant who tried to get a job with them is sitting somewhere in a dataset. It’s much easier and cheaper to look at huge numbers of observations, which makes it much easier to find potentially useful results. Being able to capture every “hit” on your employee-benefits website, for example, can tell us almost instantly what kinds of employees are worried about what types of issues.

But here’s the brake to the Big Data bandwagon: Not all HR data is big. The key piece of information needed to make workforce analytics valuable is a measure of job performance. We can’t say anything about which of our 1 million applicants will do well without being able to identify who among our employees is a good performer. In most companies, that information is no better than it was in the 1950s, and, in many companies, it is actually worse, as we’ve gone from assessment-center scores to a supervisor’s guess about potential. The phrase “garbage in/garbage out” is highly relevant here.

That takes us to the last and most obtuse of the buzzwords: machine learning.

It is a different way to think about data than most of us have previously seen, one that came from people whose expertise was rooted in computers rather than statistics per se. The “learning” idea comes from the fact that computer programs (i.e., the “machine”) can be designed to look at data and find patterns that allow them to make predictions.

How exactly machine learning differs from statistics is a topic of endless fascination to people in those two fields, but for the rest of us, here’s what matters: The traditional, statistical approach to analyzing HR data begins with hypotheses that come from prior research. It includes careful statements about assumptions in the study, or studies, and whether those assumptions are true. Traditional machine learning, in contrast, is theory-free and assumption-free. It just looks for patterns in the data, and it uses different techniques from what had commonly been used in statistics to find the clearest patterns.

A statistical examination of whether a given employment test predicts good hires concludes with either “yes” or “no,” where "no" means we can’t rule out that the relationships were due to chance.

In the case of a machine-learning examination, we might instead conclude that, while there is no overall relationship, there is a very powerful relationship for one subset of employees, nothing much for another subset, and, for a third subset, a strong relationship that runs in a different direction from the first group’s.

The power of machine learning comes from the fact that it might well find important predictors that we never thought of before, because prior theory didn’t include them (e.g., the distance from an applicant’s home to the work location predicts turnover) and because traditional analysis wasn’t particularly adept at “mining” through lots of seemingly unrelated data to find predictors.
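To make that contrast concrete, here is a minimal Python sketch built on entirely synthetic data with invented column names (test_score, commute_km, dept); it is not drawn from any study mentioned here, only an illustration of the “overall test versus pattern search” idea. A single overall correlation says the screening test is useless, while a small decision tree, a typical pattern-finding tool, surfaces the subgroup-specific relationships:

# Hypothetical illustration on made-up data; not real HR data and not any
# vendor's method, just the "overall test vs. pattern search" contrast.
import numpy as np
from scipy.stats import pearsonr
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 3000

# Synthetic employees: a screening-test score, a commute distance, and a
# department flag (all names invented for this sketch).
test_score = rng.normal(50, 10, n)
commute_km = rng.uniform(1, 60, n)
dept = rng.integers(0, 2, n)  # 0 = field staff, 1 = office staff

# Performance depends on the test score in opposite directions by department,
# and long commutes drag everyone down a little.
performance = (np.where(dept == 1, 0.8, -0.8) * (test_score - 50)
               - 0.1 * commute_km + rng.normal(0, 5, n))

# Traditional single-number check: overall, the test looks useless.
r, p = pearsonr(test_score, performance)
print(f"overall correlation: r={r:.2f}, p={p:.3f}")  # r comes out near zero

# Pattern search: a shallow tree splits on department and then on the test
# score, exposing the subset-level relationships the overall number hid.
X = np.column_stack([test_score, commute_km, dept])
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, performance)
print(export_text(tree, feature_names=["test_score", "commute_km", "dept"]))

On this fabricated data the overall correlation is essentially zero, while the tree’s printed rules predict high performance for high scorers in one department and for low scorers in the other, the same kind of “strong for this subset, reversed for that one” finding described above. Whether such a pattern means anything, or would hold at another company, is exactly the question the pattern search alone cannot answer.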

All that sounds really promising for machine learning, but there are a bunch of things about this approach that we’d better consider pretty carefully before diving in.

The first of these is a reminder that machine learning produces facts, rather than conclusions. It tells us “X is related to Y,” but not why they are related. Without hypothesis testing and clear statements of assumptions, we don’t learn much about what a given relationship means or why it exists. Perhaps most important, machine learning can’t tell us much about the likelihood that a relationship observed in this dataset will be useful in another context.

This matters in HR because most of the frameworks that support modern employment—especially legal frameworks—relied on the scientific method and traditional statistical tests for their foundation.

Consider, for example, the legal norm that selection tests should not discriminate, and the “Uniform Guidelines on Employee Selection Procedures” put together by psychologists over the past generation to ensure that hiring practices are valid and do not discriminate against protected groups. Machine learning, as traditionally practiced, would surely uncover relationships that, if applied to hiring, would violate the law.

Machine learning applied to big data will certainly turn up a lot of interesting facts for workforce analytics to ponder. Transferring those facts to practice, however, is still a big leap. On their own, these “buzzword” approaches won’t get us there.

Peter Cappelli is the George W. Taylor Professor of Management and director of the Center for Human Resources at The Wharton School of the University of Pennsylvania in Philadelphia. His latest book is Why Good People Can't Get Jobs: The Skills Gap and What Companies Can Do About It.

Source: http://www.hreonline.com/HRE/view/story.jhtml?id=534358638
