00:08 — This episode is brought to you by the Cloud Wars Expo. This in-person event will be held June 28th to 30th at the Moscone Center in San Francisco, California.
00:49 — Stevens Institute of Technology recently collaborated with Princeton University and the University of Chicago to create an AI algorithm that predicts how people will be perceived based on a photo of their face.
01:03 — Because AI models learn from humans and are built to mimic human behavior, the researchers developed the algorithm from a project in which thousands of people gave their first impressions of thousands of computer-generated photos of faces. The participants were also tasked with ranking the images against certain criteria.
01:37 — From this data and algorithm, the researchers are evaluating what kinds of outputs the AI generates from these judgments. However, explainability has yet to be built in; essentially, the model cannot report how it arrived at a given conclusion.
02:11 — Although there are many possible applications, there are also dangerous aspects, since the model is built on mimicking human behavior. For instance, people could be unfairly dropped from consideration if it were used for job recruitment.
03:00 — Deepfakes and the tools used to create them can manipulate photos. To address the issues that arise from this, the research team patented their algorithm with the goal of infusing ethical standards.
03:45 — The goal of the entire AI model is to help address the unconscious bias and stereotype-based judgments that exist, and to use the findings to educate others.