As AI floods into everyday life, is the algorithm the ultimate answer to potential bias?

In principle, artificial intelligence has no prejudice of its own: it does not "believe" something is true or false for reasons that cannot be justified by logic. Unfortunately, human bias enters machine learning at every stage, from designing algorithms to interpreting data, and until now almost no one has seriously tried to solve this problem.

On Tuesday, foreign media reported that an AI recruiting startup founded by former Google search engineer Ashutosh Garg and former Facebook engineer Varun Kacholia recently completed a $24 million financing round from Lightspeed Ventures and Foundation Capital.


The startup aims to close the information gap in hiring, job seeking and promotion, and to address discrimination in recruiting, by aggregating labor-force information from around the world. It relies on self-developed software to collect and process information about candidates and employers. According to the company, the system's processing alleviates information asymmetry: its matching rate is eight times that of traditional recruitment, and it cuts screening costs by 90%.
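To make the idea of algorithmic matching concrete, here is a minimal sketch of skill-based candidate-job scoring. The scoring model, field names and weights are hypothetical illustrations, not the startup's proprietary method.

```python
# Minimal sketch of skill-based candidate-job matching.
# The scoring model is a hypothetical illustration only.
from typing import Dict

def match_score(candidate_skills: Dict[str, float],
                required_skills: Dict[str, float]) -> float:
    """Weighted overlap between a candidate's skills and a job's requirements."""
    total_weight = sum(required_skills.values())
    if total_weight == 0:
        return 0.0
    covered = sum(weight * candidate_skills.get(skill, 0.0)
                  for skill, weight in required_skills.items())
    return covered / total_weight

job = {"python": 1.0, "machine learning": 0.8, "sql": 0.5}
candidate = {"python": 0.9, "sql": 1.0, "statistics": 0.7}
print(f"match score: {match_score(candidate, job):.2f}")  # 0.61
```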

It is understandable to apply big data and automated algorithmic decision-making to hiring, and a huge data foundation can improve the efficiency of decisions. But are the algorithm's results necessarily free of bias? On this point, Garg said: "People are also biased in the recruitment process, because the information any individual can obtain is limited. A data algorithm provides recruiters with enough information and insight to compensate for the skills or companies a recruiter may not understand, which significantly increases the number of qualified candidates."

According to the company, the product's screening mechanism eliminates potential human prejudice so that it meets the requirements of the Equal Employment Opportunity Commission: age, gender, race, religion, disability and similar attributes are never used as reference criteria by the algorithm. There is merit in eliminating people's ingrained stereotypes and making personnel decisions less "personal", but only if the decision-making system itself is not affected by those prejudices. Supervising and correcting the algorithm therefore inevitably becomes the top priority.
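The simplest part of such a mechanism is excluding protected attributes from the model's inputs. Below is a minimal sketch, assuming hypothetical record fields; note that dropping these columns alone does not guarantee fairness, since other features (a zip code, a school name) can act as proxies for them.

```python
# Sketch: strip protected attributes before features reach the model.
# Field names are hypothetical; proxy features can still leak this
# information, so dropping columns alone does not guarantee fairness.
PROTECTED_ATTRIBUTES = {"age", "gender", "race", "religion", "disability"}

def scrub_features(candidate: dict) -> dict:
    """Return a copy of the candidate record without protected attributes."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_ATTRIBUTES}

record = {"skills": ["python", "sql"], "years_experience": 6,
          "gender": "female", "age": 34}
print(scrub_features(record))
# {'skills': ['python', 'sql'], 'years_experience': 6}
```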

In fact, regarding prejudice in artificial intelligence, the MIT Technology Review published a commentary on this issue in 2017.

“At this critical moment in the development of machine learning and artificial intelligence, algorithmic bias is becoming a major social problem. If potential bias in algorithms goes unrecognized and uncontrolled in important decisions, it can lead to serious negative consequences, especially for poorer communities and minorities. Moreover, the eventual backlash may hinder the progress of an extremely useful technology.”

Algorithm expert Kevin Slavin also said in a TED talk that algorithms are "extracted from the world, derived from the world," but have now "begun to shape the world." In an era when algorithms shape the world, we should think about how to break through their bottlenecks and give AI positive values.

1. Algorithms are not actually objective

As we usually understand it, the biggest advantage of an algorithm is that it can deliver intelligent, precise recommendations based on a user's "digital self." In other words, the algorithm is a fast channel through which people find what they need in a mass of information. This process works only because people trust the algorithm; that is, they assume it is "objective."
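To see how little "objectivity" there is in this channel, consider a minimal sketch of profile-based ranking (the profiles, items and weights are all invented): whatever the user ultimately sees is fully determined by choices baked into the scoring function.

```python
# Minimal sketch of profile-based recommendation: rank items by how well
# they match a user's "digital self" (an interest vector).
def rank_items(user_profile: dict, items: dict) -> list:
    """Return item names sorted by dot-product similarity to the profile."""
    def score(features: dict) -> float:
        return sum(user_profile.get(k, 0.0) * v for k, v in features.items())
    return sorted(items, key=lambda name: score(items[name]), reverse=True)

user = {"tech": 0.9, "sports": 0.1}
items = {"gadget review": {"tech": 1.0},
         "match report":  {"sports": 1.0},
         "ai editorial":  {"tech": 0.7, "politics": 0.3}}
print(rank_items(user, items))
# ['gadget review', 'ai editorial', 'match report']: the options the user
# sees are entirely the options the scoring function has decided to give.
```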


However, people forget that AI algorithms and their decision-making processes are shaped by developers. The code developers write, the training data they use, and the process of stress-testing the algorithm all influence the choices the algorithm later makes. This means that developers' values, biases and human flaws are reflected in the software.

Take the "Cambridge Analytica scandal" that Facebook has been unable to shake off: using advanced computing or AI technology to try to manipulate elections through people's private data is, in essence, a question of basic data ethics. Every company has its own set of algorithms because each has different purposes and values. When we receive information we feel we have the right to choose, but in fact all the options are ones the algorithm has given us. In this sense, the algorithm is not objective.

2. When algorithms are not objective, learn to save yourself

Algorithms have been questioned since their birth, and this questioning reflects humanity's scientific rationality. While the design of algorithms continues to improve, we must also learn to save ourselves; in other words, we must learn to protect ourselves.

Overall, the biggest problem with algorithms is their opacity. Even professional technicians have not been able to fully figure out and understand this complex field, let alone ordinary people. Therefore, while an algorithm's design philosophy and operational logic remain uncertain, what we need to do is internalize the idea that "the algorithm is not objective" and always stay wary of its limitations.

Perhaps a more radical mode of thinking is called for here: we must learn to ask questions, and through those questions understand what an algorithm does and what it was originally designed to do. For example, browse news through traditional web pages rather than relying on smart search. This may not always succeed, but we can still learn to use our own logic to counter the information narrowing the algorithm may bring, so as not to be confined by it.

3. How to reduce prejudice in artificial intelligence

As for how to reduce prejudice in artificial intelligence, Microsoft researchers have said that the most effective way is to start with the data used to train the algorithm.


The data distribution itself carries a degree of bias. Take the US presidential election: the distribution of US citizen data in developers' hands is not balanced. There may be more data on local residents than on immigrants, and more on the rich than on the poor. Such imbalances can lead to erroneous conclusions about the composition of society; a machine learning algorithm analyzing the data might conclude, for example, that "most Americans are wealthy whites."
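A minimal sketch of this failure mode, with invented numbers: a sample that over-represents one group produces a badly skewed population estimate, and reweighting against known population shares (one standard correction) repairs it.

```python
# Sketch: an unbalanced training sample yields a wrong population estimate.
# All numbers are invented for illustration.

# Suppose wealthy households are heavily over-represented in the collected data.
sample = [("wealthy", 80_000)] * 700 + [("other", 30_000)] * 300
naive_mean = sum(income for _, income in sample) / len(sample)
print(f"naive mean income: ${naive_mean:,.0f}")  # $65,000, far too high

# Importance-weight each record by (true group share / sample group share).
true_share = {"wealthy": 0.15, "other": 0.85}
sample_share = {"wealthy": 0.7, "other": 0.3}
weights = [true_share[g] / sample_share[g] for g, _ in sample]
weighted_mean = (sum(w * income for w, (_, income) in zip(weights, sample))
                 / sum(weights))
print(f"reweighted mean income: ${weighted_mean:,.0f}")  # $37,500
```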

Similarly, studies have shown that AI used by law enforcement agencies tends to target black and Latino residents when detecting photos of offenders that appear in the news. Many other forms of bias exist in training data as well, though they are mentioned less often. However, training data is only one thing to review. It is equally important to uncover human prejudice through "stress tests."
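One common form such a stress test can take is comparing the model's selection rates across demographic groups. The sketch below applies the "four-fifths rule", a heuristic associated with EEOC guidance, to invented data; it is an illustration, not the method used by any particular company.

```python
# Sketch of a bias "stress test": compare selection rates across groups.
# The four-fifths rule flags a group whose selection rate falls below
# 80% of the highest group's rate. All data here is invented.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: s / t for g, (s, t) in counts.items()}

decisions = ([("group_a", True)] * 60 + [("group_a", False)] * 40
             + [("group_b", True)] * 30 + [("group_b", False)] * 70)
rates = selection_rates(decisions)
best = max(rates.values())
for group, rate in rates.items():
    flag = "FLAG" if rate < 0.8 * best else "ok"
    print(f"{group}: selection rate {rate:.0%} ({flag})")
# group_a: 60% (ok); group_b: 30% (FLAG), since 30% < 0.8 * 60% = 48%
```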

In fact, to make AI unbiased, we must be brave enough to open the algorithm's "black box." Kuaishou CEO Su Hua once said that without a sound understanding of society and humanistic reflection, technology alone can easily go astray, and that philosophical wisdom is needed to harness the power amplified by algorithms and technology and avoid the various pitfalls along the way. For now, what we must do is try our utmost to keep these things from happening.
