HR management has, over the past few decades, graduated from a single function to a body of complex and specialized functions with clearly defined and distinctive areas of focus. HR is now called on to focus primarily on strategic goals and to add increasing value to organizations. The other field that has become integral to business is technology. It is therefore not surprising that, in HR's effort to become increasingly relevant, IT is being leveraged in the execution of the HR function in a growing number of ways. This e-HR revolution has taken many forms, from applicant tracking systems to machine learning in recruitment and selection to software-driven onboarding and employee HR support. The consequence is that more and more HR activities are being executed electronically, by a computer instead of a person.
As in all instances of change, we have to examine the impact. The most immediate and obvious impacts are faster decision making, reduced administrative effort, increased efficiency, and increased access to information. It is this last "benefit" that I think may create an uneasy relationship between IT and HR.
In our digital, information-driven societies, any holder of significant amounts of data has to be careful about individuals' right to privacy and about the manner in which data is acquired, stored, interpreted, and shared. HR may therefore find its role broadening to include not only the management of human resources but also the management of electronic resources and the development of policies on ethics and compliance.
This increased use of technology in HR is fraught with potential sources of conflict, for the simple reason that individual expectations are at times at odds with business needs. There is the question of the extent to which individuals carry their rights as citizens into the workplace: to what extent can an employer encroach on the rights individuals hold as citizens of a state?
The right to privacy: data acquisition, storage, and sharing
In Canada, federally regulated employers are governed by the Personal Information Protection and Electronic Documents Act (PIPEDA), while several provinces have their own regulations governing provincially regulated businesses.
Based on PIPEDA, the general guidance is that an employer’s need for information must be balanced with the employee’s right to privacy. The guidance provided by the Office of the Privacy Commissioner of Canada is as follows:
- The employer should say what personal information it collects from employees, why it collects it, and what it does with it.
- Collection, use, or disclosure of personal information should normally be done only with an employee’s knowledge and consent.
- The employer should only collect personal information that’s necessary for its stated purpose, and collect it by fair and lawful means.
- The employer should normally use or disclose personal information only for the purposes that it collected it for, and keep it only as long as it’s needed for those purposes, unless it has the employee’s consent to do something else with it, or is legally required to use or disclose it for other purposes.
- Employees’ personal information needs to be accurate, complete, and up-to-date.
- Employees should be able to access their personal information, and be able to challenge the accuracy and completeness of it.
Now to the thornier issue of data interpretation: is it possible for data collection and interpretation to introduce biases into the workplace? I argue that it most certainly can. The interesting thing about technology is that it is designed by humans, used by humans, and interpreted by humans. So the use of data is not always an effective means of removing biases and unfair practices from business processes, because the data is still being used by humans who have biases. For this reason, HR practitioners must remain vigilant and ensure that organizational policies exist to protect the values we work hard to create from being eroded within organizations.
To illustrate, many businesses have developed the practice of reviewing the social media activity of prospective or existing employees for insight into things such as values, beliefs, and even exit risk. In these situations, is the organization not imposing its own biases by seeking to determine whether individuals share its values? Unless the intent is to identify a specific set of anti-social or violence-prone values, does this not screen for factors that are not job related and that may be influenced by race, ancestry, place of origin, and other grounds protected from discrimination?
Some may argue that this is not really a technology bias because a human is interpreting the information. However, even the purest form of data interpretation, one free of direct human interference, namely artificial intelligence or machine learning, allows biases to creep into organizations.
“Any time you have a dataset of human decisions, it includes bias,” said Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville. “Whom to hire, grades for student essays, medical diagnosis, object descriptions, all will contain some combination of cultural, educational, gender, race, or other biases.”
One only has to think about autocorrect on a smartphone to understand this. If you type a particular word often enough while messaging, then even when you enter only the first two letters of that word, your phone assumes it is the word you intend, whether or not that is the case. Machines read patterns, and patterns are about frequency. Machine learning software can therefore naturally discriminate against minorities simply because, by simple math, their frequency of interaction is smaller. For example, it was discovered that on LinkedIn, high-paying jobs were not displayed as frequently to women as they were to men. This was a result of the way the algorithms were written: reportedly, the initial users of the product features for these high-paying jobs were predominantly male, so the algorithm simply learned a bias.
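The frequency mechanism described above can be made concrete with a minimal sketch. The data here is entirely hypothetical, and real recommendation systems are far more complex, but the toy model below shows how software that simply mirrors past interaction frequencies will keep under-serving whichever group interacted less in the historical data.

```python
# Minimal sketch of frequency-learned bias (hypothetical data).
# Suppose a job ad's early click history came mostly from group "A";
# a naive recommender that shows the ad in proportion to past clicks
# will then keep showing it to group "B" far less often.
from collections import Counter

# Historical clicks on a high-paying job ad, skewed toward group A.
clicks = ["A"] * 90 + ["B"] * 10

counts = Counter(clicks)
total = sum(counts.values())

# The "model": future exposure mirrors past interaction frequency.
show_rate = {group: n / total for group, n in counts.items()}
print(show_rate)  # group B is now shown the ad only 10% of the time
```

Nothing in this sketch is malicious; the skew comes entirely from the training data, which is the point the autocorrect and LinkedIn examples make.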
It is not difficult to see, then, how recruiting software that scans resumes can end up delivering biased decisions to talent acquisition teams.
The solution to these problems, in addition to an awareness of the potential harm and the development of appropriate policy, lies in technoethics. "Technoethics is an interdisciplinary research area that draws on theories and methods from multiple knowledge domains (such as communications, social sciences, information studies, technology studies, applied ethics, and philosophy) to provide insights on ethical dimensions of technological systems and practices for advancing a technological society. Technoethics views technology and ethics as socially embedded enterprises and focuses on discovering the ethical use of technology, protecting against the misuse of technology, and devising common principles to guide new advances in technological development and application to benefit society."
The element of technoethics that I believe would be most useful to the HR function is organizational technoethics, which focuses on how technological advancements impact organizations and the ethical issues they create. While this article cannot serve as a complete analysis of organizational technoethics, it is important that HR practitioners engage in active dialogue and bring these issues of ethics in technology to the table within their organizations.