This very human phenomenon can also be easily weaponized against us to spread misinformation and cause real-world harm. With the proliferation of AI tools such as deepfake technology, there's even fear that it could be used on a mass scale to manipulate elections and push false narratives.

That's what's at the heart of a study published July 6 in PLOS One, which found that deepfaked clips of movies that don't actually exist caused participants to falsely remember them. Some viewers even rated the fake remakes as better than the originals, underscoring the alarming power of deepfake technology to manipulate memory.

However, there's one silver lining: the study's authors also found that simple text descriptions of the fake remakes effectively induced false memories in participants. On its own that sounds like a bad thing, and it is! But the finding suggests AI deepfakes may be no more effective at spreading misinformation than less technologically complex methods. What we end up with is a complicated picture of the harms the tech could create: certainly to be feared, but also replete with its own limitations.

"We shouldn't jump to predictions of dystopian futures based on our fears around emerging technologies," lead study author Gillian Murphy, a misinformation researcher at University College Cork in Ireland, told The Daily Beast. "Yes there are very real harms posed by deep fakes, but we should always gather evidence for those harms in the first instance, before rushing to solve problems we've just assumed might exist."

For the study, the authors recruited a group of 436 people to view clips of various deepfaked videos and were told that they were remakes of real movies. These included Brad Pitt and Angelina Jolie in The Shining, Chris Pratt in Indiana Jones, Charlize Theron in Captain Marvel, and, of course, Will Smith in The Matrix. Participants also watched clips from actual remakes, including Carrie, Total Recall, and Charlie and the Chocolate Factory. Meanwhile, some of the participants were instead given a text description of the fake remake.

The researchers found that an average of 49 percent of participants believed that the deepfaked videos were real. Of this group, a good portion said the remake was better than the original: for example, 41 percent said the Captain Marvel remake was better than the original, while 12 percent said the same of The Matrix remake. However, the findings also showed that when participants were given a text description of the deepfake, it did as well as, and occasionally better than, the video.

This might suggest that tried-and-true means of misinformation and distorting reality, such as fake news articles, might be just as effective as using AI.

"Our findings are not especially concerning, as they don't suggest any uniquely powerful threat of deepfakes over and above existing forms of misinformation," Murphy explained. However, she added that the study only looked at short-term memory. "It may be that deepfakes are a more powerful vehicle for spreading misinformation because for example they are more likely to go viral or are more memorable over the long-term."

This speaks to a broader issue that lies at the bedrock of misinformation: motivated reasoning, or the way in which people allow their biases to shape how they perceive information. For example, if you believe that the 2020 election was stolen, you're more likely to believe a deepfaked video of someone stuffing ballot boxes than someone who believes that it wasn't stolen.

"This is the big problem with disinformation," Christopher Schwartz, a cybersecurity and disinformation researcher at the Rochester Institute of Technology who wasn't involved in the study, told The Daily Beast. "More than the quality of information and the quality of sources is the problem that when people want something to be true, they will try to will it to be true."

Motivated reasoning is a large part of the reason our current cultural and political landscape is the way it is, according to Schwartz. While humans might not necessarily be swayed by deepfakes or fake news, they might be more inclined to seek out articles and opinions that affirm their worldview. We trap ourselves in our own digital and social bubbles where we see and hear the same thoughts repeated, until we're convinced that what we believe is the only thing that is true.