Google released a series of guidelines for creating "responsible AI"

There are countless artificial intelligence apps that make our daily lives easier and more productive, and at the current pace of innovation, more and more tasks can be done with a single command.

AI is accessible to a growing number of people around the world, but as this technology creates new possibilities for improvement and everyday help, its advances also raise questions about how it works and, for example, what problems it can cause if it is not developed responsibly.

As the name suggests, artificial intelligence is intelligence created by humans but carried out by machines, and it has, to a certain extent, some of the same abilities as humans: it learns, improves, and can operate in certain domains.

When we talk about artificial intelligence, two broad schools of thought collide: those who see it as a tool, nothing more, and those who believe it is only a matter of time before it becomes a threat to the human race.

As AI capabilities and possibilities expand, we will also see the technology used for dangerous or malicious purposes. That is why those who see it as a threat view it with suspicion and fear its impact on their lives. Famous figures such as Elon Musk are among them.

The Tesla and SpaceX boss has warned more than once that AI will outperform human cognitive abilities. Musk believes this technology will threaten humans in the future, especially in the workplace.

This is also why his company Neuralink is working on brain-machine interfaces implanted in the skull, intended to prepare humanity for a "fatal" future in which robots rule. Science-fiction films depicting dystopian futures where AI controls humans have also fed these fears.

Researchers say an AI is unlikely to display human emotions like love or hate and that there is no reason to expect AI to become intentionally nice or mean.

In this respect, Google has been concerned about the danger AI can pose when it is not developed with care, and about the way it interacts with people. AI must learn like a human being, yet remain efficient and not become a dangerous machine. Google has been a major player in the development of AI.

Through the Pentagon research program "Project Maven," the company "trained" AI to classify objects in drone images. In other words, it taught drones to understand what they are looking at.

Google now acknowledges that artificial intelligence has to live with bias, and the company wants to do something about it. To that end, Google has put in place programs on the subject of "Responsible AI."

Two of the fundamentals of Google's AI principles are "being accountable to people" and "avoiding creating or reinforcing unfair bias." This includes developing interpretable artificial intelligence systems that put people at the forefront of every stage of the development process, while ensuring that unfair human biases are not reflected in a model's results.

According to these guidelines, Google strives to develop artificial intelligence responsibly and has established a number of specific application areas it will not pursue, such as implementing artificial intelligence in technologies that can cause harm or injury to people.

Google will endeavor to ensure that the information available through AI models is accurate and of high quality. Furthermore, the technology "must be accountable to people, subject to human direction and control."

Artificial intelligence algorithms and data sets can reflect, reinforce, or reduce unfair biases. In this sense, Google will strive to avoid unfair impacts on people, especially those related to sensitive characteristics such as race, ethnicity, gender, income, nationality, or political and religious beliefs, among others.
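In practice, this kind of principle is often checked by comparing a model's behavior across groups. The snippet below is a minimal illustrative sketch, not Google's own tooling: it computes the gap in positive-prediction rates between groups defined by a sensitive attribute. The column names ("group", "prediction") and the 0.1 tolerance are hypothetical examples.

```python
# Minimal sketch of a bias check: compare a model's positive-prediction
# rate across groups defined by a sensitive attribute. Column names and
# the tolerance are hypothetical, not part of Google's guidelines.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "group",
                           pred_col: str = "prediction") -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Toy data: predictions (1 = positive outcome) for two groups.
data = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "prediction": [1, 1, 0, 1, 0, 0],
})

gap = demographic_parity_gap(data)
print(f"Demographic parity gap: {gap:.2f}")  # 0.33 for this toy data
if gap > 0.1:  # illustrative, arbitrary tolerance
    print("Warning: model outcomes differ noticeably across groups.")
```

A check like this only surfaces one narrow kind of disparity; real audits look at many metrics and at the data itself.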

Source: https://ai.google

