AI Generated These Extraordinary Images: Here's Why Experts Are Worried

In recent months, many people have marveled at how AI systems can produce images that are both impressive and unsettling. Examples shared online include bears roaming the streets of Hong Kong and cats made out of pasta and meatballs.

Widely shared online, these images strike most viewers as strange or silly. One of the AI programs capable of producing them is DALL-E 2, which was released last year and can add a wide variety of objects to a picture.

It is widely believed that these systems could eventually be used across creative industries, from advertising to art. One of them, Midjourney, has already been used to create covers for publications. Google and OpenAI have noted that such systems can also be used to edit images.

For now, however, neither DALL-E 2 nor Google's Imagen is accessible to the public. Despite their advanced capabilities, both systems still produce troubling results when analyzing and rendering images; for example, they can generate pictures that reflect cultural and gender biases.

Experts worry that these systems could have damaging effects as their output spreads across social media platforms. Because the tools can generate almost any image a user can describe, researchers fear they could automate the production of inappropriate imagery and harmful stereotypes.

Despite these alarming results, experts are not ruling out real-world use of the programs; their focus is on making the systems safe first. The concern is familiar: facial recognition programs, for instance, have been widely criticized for being inaccurate.

In response to concerns about the programs' potentially harmful effects, Google and OpenAI have publicly addressed the matter. Both companies acknowledge that, despite their systems' capabilities, certain text prompts can still lead the models to generate gender- and racially-biased results.

In February, OpenAI policy research manager Lama Ahmad said the company was continuing to work on reducing bias in its AI systems. Through a study conducted with experts, the company was able to improve DALL-E 2's performance.

In a research paper released earlier this year, Google's researchers acknowledged that the Imagen system can encode harmful stereotypes in the images it renders. They noted, for example, that the program tends to produce images of women with lighter skin tones.

Julie Carpenter, a research scientist at California Polytechnic State University in San Luis Obispo, noted that the tasks being asked of these image-processing systems are starkly complex.

As she sees it, humans need to understand what AI can and cannot do so that they can work with these systems as partners rather than treat them as threats. For now, though, it is hard to judge how effective the programs will be in practice.

Despite the advances in the field, Michel remains worried that these systems could be used to create harmful content. Deepfakes, an earlier and highly capable application of AI, have already been used to produce pornography.

A Word of Caution

To keep the programs from being used to generate harmful content, Google and OpenAI built filters that remove pornography from their training data, which consists of paired images and captions.

Despite the filters' success, researchers note that racist and otherwise discriminatory content still makes it into the data.
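
The article does not describe how these filters actually work. Purely as an illustration, the sketch below drops image-caption pairs whose caption contains a banned term or whose image is flagged by some NSFW classifier; the `Pair` type, the `nsfw_score` callable, the term list, and the threshold are all hypothetical stand-ins, not the pipelines Google or OpenAI use.

```python
# Hypothetical sketch of filtering a text-to-image training set.
# All names, thresholds, and the banned-term list are illustrative only.
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Pair:
    image_path: str
    caption: str

BANNED_TERMS = {"nsfw", "porn", "explicit"}  # placeholder keyword list

def is_clean(pair: Pair, nsfw_score: Callable[[str], float],
             threshold: float = 0.2) -> bool:
    """Return True if the pair passes both the caption and image checks."""
    caption_ok = not any(term in pair.caption.lower() for term in BANNED_TERMS)
    image_ok = nsfw_score(pair.image_path) < threshold  # assumed classifier
    return caption_ok and image_ok

def filter_pairs(pairs: Iterable[Pair],
                 nsfw_score: Callable[[str], float]) -> List[Pair]:
    """Keep only the image-caption pairs that pass the filter."""
    return [p for p in pairs if is_clean(p, nsfw_score)]
```

A keyword-and-classifier filter of this kind is blunt by design, which is part of why harmful material can slip through and why the filtering itself can skew what remains in the data.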

One side effect of the filtering has reportedly been a drop in the amount of imagery of women in the training data: because women appear far more often than men in sexualized content, removing that content removes a disproportionate share of images of women overall.

Manually filtering the enormous amounts of data these image systems collect is not feasible, and because people from different cultures disagree about what should be labeled or deleted, building a filter that works for everyone can be arduous.

Despite the complexity of the task, researchers are still trying to pare down the data they collect in order to limit discrimination in their datasets.

According to computer science professor Alex Dimakis, one way for image systems to get by with less collected data is augmentation, for example cropping and rotating the images they already have, as in the sketch below.
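
The article gives no implementation details; as a minimal illustration, an augmentation pass using the Pillow library might look like the following, where the input and output directories, crop size, and rotation angles are all assumptions.

```python
# Minimal sketch of dataset augmentation via cropping and rotation.
# Directory names, crop size, and angles are illustrative assumptions.
from pathlib import Path
from PIL import Image

def augment(image_path: Path, out_dir: Path) -> None:
    """Write a few cropped and rotated variants of one source image."""
    out_dir.mkdir(parents=True, exist_ok=True)
    with Image.open(image_path) as img:
        w, h = img.size
        # Center crop that trims 10% from each edge.
        crop = img.crop((w // 10, h // 10, w - w // 10, h - h // 10))
        crop.save(out_dir / f"{image_path.stem}_crop.png")
        # Simple right-angle rotations; expand=True keeps the full frame.
        for angle in (90, 180, 270):
            img.rotate(angle, expand=True).save(
                out_dir / f"{image_path.stem}_rot{angle}.png"
            )

if __name__ == "__main__":
    for path in Path("source_images").glob("*.jpg"):  # assumed input folder
        augment(path, Path("augmented_images"))
```

Each source image yields several variants, so a smaller collection of originals can go further during training.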

Even with its advantages, this method alone is not enough to eliminate discrimination. Specialists believe researchers should also examine the images in the dataset in detail and filter out material that carries racial and gender bias.

For now, the two companies are focused on keeping disturbing content generated by their systems off online platforms.

On DALL-E 2's project page, for instance, no realistic portraits of real individuals appear. The system uses techniques to keep people from being depicted as if they were real, even though that can prevent users from generating some images relevant to a given topic.

OpenAI's DALL-E 2 platform also has a comprehensive content policy that bars users from creating or sharing anything that is not G-rated, and it uses various filters to block the generation of harmful content.

A couple of months ago, OpenAI said it would allow users to share realistic images generated on its platform, but only after it had added several safety features, including measures that prevent people from sharing content featuring public figures.

Through partnerships with academic institutions, OpenAI is also working to improve the performance of its image systems.

Google, on the other hand, has restricted who can use its Imagen platform. According to Mohammad Norouzi, a co-author of the Imagen paper, the system will not display sensitive or graphic content.

An evaluation conducted earlier this year found that Imagen exhibits various forms of bias, including multiple cultural and social biases.

On the Imagen project page, Google notes multiple forms of bias that were detected in the system and highlights the instances it identified.

One example from that page shows two regal-looking animals wearing Western-style garments. Even though the prompt said nothing about what kind of outfits the animals should wear, the generated images defaulted to Western attire.
