As artificial intelligence (AI) plays a larger role in decision-making across a wider range of industries, ethical concerns are growing.

Artificial intelligence, or AI, has been a driving force behind high-level STEM research for decades. For the majority of consumers, exposure to the technology's power and potential came primarily through internet platforms such as Google and Facebook, and the online retailer Amazon. Today, AI is essential to a wide range of industries, including health care, banking, retail, and manufacturing.

However, the promise of these game-changing technologies to improve efficiency, lower costs, and accelerate research and development has recently been tempered by concerns that such complex, opaque systems may cause more societal harm than economic good. With virtually no oversight from the United States government, private companies use artificial intelligence software to make decisions about health and medicine, employment, creditworthiness, and even criminal justice, and they are not required to explain how they ensure that their programs are not encoded with structural bias, whether consciously or unconsciously.

AI's increasing popularity and usefulness are undeniable. According to a forecast released in August by the technology research firm IDC, worldwide business spending on artificial intelligence is expected to reach $50 billion this year and $110 billion annually by 2024, despite the global economic slump caused by the COVID-19 pandemic. The retail and banking industries spent the most this year, each more than $5 billion. IDC predicts that the media industry and federal and central governments will invest most heavily between 2018 and 2023, and that artificial intelligence will be "the disrupting influence changing entire industries over the next decade."

"Virtually every large corporation now has multiple artificial intelligence systems and considers the deployment of AI to be a critical component of their overall strategy," said Joseph Fuller, professor of management practice at Harvard Business School, who co-directs Managing the Future of Work, a research project that examines, among other things, the development and implementation of artificial intelligence, including machine learning, robotics, sensors, and industrial automation, in business and the workplace.

From the field's beginnings, artificial intelligence was widely assumed to be suited mainly to automating simple, repetitive tasks that require low-level decision-making. But AI has rapidly grown in sophistication, thanks to more powerful computers and the compilation of massive data sets. One branch of artificial intelligence, machine learning, notable for its ability to sort and analyze enormous amounts of data and to learn over time, has transformed a wide range of fields, including education.

Fuller explained that firms are now utilizing artificial intelligence to manage sourcing of materials and products from suppliers, as well as to integrate vast troves of information to aid in strategic decision-making. Additionally, because of AI's ability to process data so quickly, it is helping to minimize time spent in the costly trial and error process of product development — a critical advance for an industry like pharmaceuticals, where it costs $1 billion to bring a new pill onto the market, according to Fuller.

AI has a variety of potential applications in the health care industry, including billing and the processing of necessary paperwork, according to experts. And medical professionals anticipate that data analysis, imaging, and diagnosis will have the greatest and most immediate impact on patient care. Imagine being able to bring all of the medical knowledge available on a disease to bear on any given treatment decision, as some have envisioned.

In the field of employment, artificial intelligence software culls and processes resumes and analyzes job interviewees' voices and facial expressions in hiring, and it is driving the growth of so-called "hybrid" jobs. Rather than replacing employees, AI takes over important technical aspects of their work, such as routing for package delivery trucks, freeing them to focus on other responsibilities and making them more productive, and therefore more valuable, to their employers.

"It's allowing them to do more stuff better, or to make fewer mistakes, or to capture their expertise and disseminate it more effectively throughout the organization," said Fuller, who has researched the effects on and attitudes of workers who have lost or are likely to lose their jobs to artificial intelligence.

According to Fuller, though automation is here to stay, the wholesale elimination of entire job categories, as happened with highway toll-takers replaced by sensors, is not likely.

"What we're going to see is jobs that require human interaction, empathy, and the application of judgment to what the machine is creating [will] have robustness," he predicted. "We're going to see jobs that require human interaction, empathy, and the application of judgment to what the machine is creating."

While big business has a significant head start, small businesses could also be transformed by artificial intelligence, according to Karen Mills '75, M.B.A. '77, who led the United States Small Business Administration from 2009 to 2013. With small businesses accounting for half of all employment in the country before the COVID-19 pandemic, this could have significant long-term implications for the national economy.

Mills believes that, rather than causing problems for small businesses, the technology could give them detailed new insights into sales trends, cash flow, ordering, and other important financial information in real time, allowing owners to understand how the business is doing and where problems may be looming without hiring anyone, becoming a financial expert, or spending hours every week laboring over the books.

AI could "completely change the game" in lending, an area where access to capital is difficult in part because banks often struggle to get an accurate picture of a small business's viability and creditworthiness. One area where AI could "completely change the game" is healthcare.

As she put it, "it's much more difficult to look inside a business operation and understand what's going on" than it is to evaluate an individual.

For both would-be borrowers and lenders, this information opacity makes the lending process laborious and expensive. Additionally, lending applications are designed to analyze larger companies or those that have already borrowed, creating a built-in disadvantage for certain types of businesses and for historically underserved borrowers, such as women and minority business owners, according to Mills, a senior fellow at Harvard Business School.

Because AI-powered software can pull information from a business's bank account, taxes, and online bookkeeping records and compare it with data from thousands of similar businesses, even small community banks will be able to make informed assessments in minutes, without the agony of paperwork and delays, and without the fear that inequity will creep into the decision-making process, similar to blind auditions for musicians.

"It all goes away," she stated emphatically.

A VENEER OF OBJECTIVITY

Not everyone, however, sees blue skies on the horizon. Many people worry that the coming age of artificial intelligence will bring new, faster, and frictionless ways to discriminate and divide on a large scale.

As political philosopher Michael Sandel, the Anne T. and Robert M. Bass Professor of Government, explained, "a part of the appeal of algorithmic decision-making is that it appears to offer an objective way of overcoming human subjectivity, bias, and prejudice." And yet, he said, "we are discovering that many of the algorithms that are used to determine who should be granted parole, or who should be presented with employment or housing opportunities, ... replicate and embed the biases that already exist in our society."

According to Sandel, who teaches a course on the moral, social, and political implications of new technologies, artificial intelligence presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and perhaps the deepest and most difficult philosophical question of the era, the role of human judgment.

In his paper, Sandel writes, "Debates about privacy safeguards and about how to overcome bias in algorithmic decision-making in sentencing, parole, and employment practices are by now familiar." He is referring to conscious and unconscious prejudices held by program developers, as well as prejudices embedded in the datasets used to train software. "However, we haven't quite figured out the most difficult question: Can intelligent machines outthink us, or are certain elements of human judgment required in making some of the most important decisions in life?"

Despite widespread concern about artificial intelligence, Fuller believes the notion that AI will suddenly inject bias into everyday life is exaggerated. For one thing, the business world and the workplace have always been filled with human decision-making, so people have long faced "all sorts" of biases that have kept them from closing deals, winning contracts, and landing jobs.

According to Fuller, when resume-screening software is calibrated carefully and implemented thoughtfully, it allows a larger pool of applicants to be considered than would otherwise be possible, and it should reduce the possibility of favoritism that can arise when human gatekeepers are involved.

Sandel, however, is not convinced. "Artificial intelligence not only replicates human biases, but it also gives these biases the appearance of scientific validity. It gives the impression that these predictions and judgments are based on objective criteria," he explained.

According to Mills, algorithm-driven lending decisions do have a potential "dark side." As machines learn from the data sets they are fed, the likelihood that they will replicate many of the banking industry's past failures, which resulted in systematic disparate treatment of African Americans and other marginalized consumers, is "pretty high."

"If we don't think carefully and thoughtfully about it, we're going to end up with redlining again," she warned.

Given that banks are subject to strict regulations and risk being held liable if the algorithms they use to evaluate loan applications are found to discriminate against certain groups of consumers, those "at the top levels" of the industry are "very focused" on this issue right now, said Mills, who closely studies the rapid changes in financial technology, or "fintech."

"It's clear that they don't want to discriminate. They're looking to provide access to capital to the most creditworthy borrowers," she explained. "It's also beneficial to them in terms of business."

OVERSIGHT OVERWHELMED

Some believe that artificial intelligence should be strictly regulated because of its potential power and widespread adoption. However, there is little agreement on how this should be accomplished or who should be in charge of establishing the rules.

So far, companies that develop or use AI systems have largely policed themselves, relying on existing laws and market forces, such as negative reactions from consumers and shareholders or the demands of highly sought-after AI technical talent, to keep them in line.

"There isn't a single businessperson on the planet who works for a company of any size who isn't concerned about this and trying to figure out what will be politically, legally, regulatoryly, [or] ethically acceptable," Fuller said.

While companies already consider their own potential liability from product misuse before launching a product, he believes it is unrealistic to expect them to anticipate and prevent every possible unintended consequence.

Few people believe that the federal government is up to the task, or that it ever will be.

Without real focus and investment, "the regulatory bodies will be ill-equipped to engage in [oversight]," said Fuller, noting that the rapid pace of technological change means even the most informed legislators will struggle to keep up. Requiring that every new product using artificial intelligence be prescreened for potential social harms would be not only impractical but also a significant impediment to innovation.

Jason Furman, professor of the practice of economic policy at the Harvard Kennedy School, agrees that government regulators need "a much better technical understanding of artificial intelligence in order to do that job well," but he believes they are capable of acquiring it.

Rather than a single watchdog agency, he said, existing bodies such as the National Highway Traffic Safety Administration, which oversees vehicle safety, could handle potential artificial intelligence issues in autonomous vehicles.

"I wouldn't have one central artificial intelligence group with a division that does cars; I'd have the car people have their own division of people who are really good at artificial intelligence," said Furman, a former top economic adviser to President Barack Obama.

Even though keeping artificial intelligence regulation within industries leaves the door open to the possibility of co-opted enforcement, Furman argues that industry-specific panels would be far more knowledgeable about the overarching technology of which AI is only one component, allowing for more thorough oversight.

While the European Union already has stringent data-privacy regulations in place, and the European Commission is considering a formal regulatory framework for the ethical use of artificial intelligence, the United States has a history of being behind the curve when it comes to technological regulation.

"We should have started three decades ago, but better late than never," Furman said, arguing that a "greater sense of urgency" is needed to compel lawmakers to act.

According to Sandel, business leaders "can't have it both ways," refusing to accept responsibility for artificial intelligence's harmful consequences while also fighting government oversight.

These large technology corporations, he noted, are neither self-regulating nor subject to adequate government oversight. "I believe there should be more of both," he said, later adding, "We can't assume that market forces will sort it out on their own. As we've seen with Facebook and other tech behemoths, that's a costly mistake."

Last fall, Sandel co-taught "Tech Ethics," a popular new General Education course, with Doug Melton, co-director of Harvard's Stem Cell Institute. As in Sandel's legendary "Justice" course, students consider and debate the big questions about new technologies, from gene editing and robots to privacy and surveillance.

"Companies must seriously consider the ethical dimensions of what they are doing, and we, as democratic citizens, must educate ourselves about technology and its social and ethical implications — not only in order to decide what regulations should be in place, but also in order to decide what role we want big tech and social media to play in our lives," Sandel explained.

He believes that doing so will necessitate a significant educational intervention, both at Harvard and throughout higher education more broadly.

"We need to ensure that all students learn enough about technology and the ethical implications of new technologies so that when they are running businesses or acting as democratic citizens, they will be able to ensure that technology serves human purposes rather than undermining a decent civic life," says the professor.
